Test Report: Hyper-V_Windows 19008

a618818e4540e3b7209a51bdf46a3b81113887e7:2024-06-03:34738

Failed tests (15/200)

TestAddons/parallel/Registry (73.6s)
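Note on the failure below: the test passes functionally (the registry and registry-proxy pods become healthy, and the in-cluster wget against registry.kube-system.svc.cluster.local succeeds), but addons_test.go:366 asserts that "out/minikube-windows-amd64.exe -p addons-402100 ip" writes nothing to stderr, and on this host the Docker CLI emits a warning about an unresolvable "default" context (a missing meta.json under C:\Users\jenkins.minikube1\.docker\contexts). The Go sketch below is illustrative only, not minikube's actual test helper; it merely mirrors the shape of that assertion, with the binary path and profile name copied from the log.

    // repro_sketch.go — illustrative only; not minikube's test code.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Binary path and profile name are taken from the log below.
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-402100", "ip")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout = &stdout
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("command failed:", err)
    		return
    	}
    	// The check treats ANY stderr output as a failure, so a host-side
    	// Docker CLI warning fails the run even though "ip" itself succeeds.
    	if stderr.Len() != 0 {
    		fmt.Printf("expected stderr to be -empty- but got: %q\n", stderr.String())
    	}
    }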

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 22.0363ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nx4pc" [54d57b97-dec3-4312-88d3-311d92254848] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0203019s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5xsp7" [20944f26-2fcb-41fa-a385-e6259d737c86] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0086817s
addons_test.go:342: (dbg) Run:  kubectl --context addons-402100 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-402100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-402100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.3337318s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 ip: (2.6347712s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0603 03:47:27.513176    7092 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-402100 ip"
2024/06/03 03:47:30 [DEBUG] GET http://172.17.90.102:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable registry --alsologtostderr -v=1: (15.5661873s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-402100 -n addons-402100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-402100 -n addons-402100: (13.4699477s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 logs -n 25: (9.5446614s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | -p download-only-448100              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| delete  | -p download-only-448100              | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| start   | -o=json --download-only              | download-only-435800 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | -p download-only-435800              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| delete  | -p download-only-435800              | download-only-435800 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| delete  | -p download-only-448100              | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| delete  | -p download-only-435800              | download-only-435800 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| start   | --download-only -p                   | binary-mirror-143700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | binary-mirror-143700                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:56079               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-143700              | binary-mirror-143700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| addons  | disable dashboard -p                 | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | addons-402100                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | addons-402100                        |                      |                   |         |                     |                     |
	| start   | -p addons-402100 --wait=true         | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:47 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	|         | -p addons-402100                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	|         | -p addons-402100                     |                      |                   |         |                     |                     |
	| addons  | addons-402100 addons disable         | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| ip      | addons-402100 ip                     | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	| addons  | addons-402100 addons disable         | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | addons-402100 addons                 | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT | 03 Jun 24 03:47 PDT |
	|         | disable metrics-server               |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-402100        | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:47 PDT |                     |
	|         | addons-402100                        |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 03:39:55
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 03:39:55.683515    5812 out.go:291] Setting OutFile to fd 688 ...
	I0603 03:39:55.684174    5812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 03:39:55.684174    5812 out.go:304] Setting ErrFile to fd 624...
	I0603 03:39:55.684174    5812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 03:39:55.708245    5812 out.go:298] Setting JSON to false
	I0603 03:39:55.711184    5812 start.go:129] hostinfo: {"hostname":"minikube1","uptime":423,"bootTime":1717410772,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 03:39:55.711184    5812 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 03:39:55.713952    5812 out.go:177] * [addons-402100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 03:39:55.719708    5812 notify.go:220] Checking for updates...
	I0603 03:39:55.719917    5812 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 03:39:55.723349    5812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 03:39:55.725768    5812 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 03:39:55.730078    5812 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 03:39:55.730719    5812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 03:39:55.733427    5812 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 03:40:00.927042    5812 out.go:177] * Using the hyperv driver based on user configuration
	I0603 03:40:00.930676    5812 start.go:297] selected driver: hyperv
	I0603 03:40:00.930676    5812 start.go:901] validating driver "hyperv" against <nil>
	I0603 03:40:00.930676    5812 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 03:40:00.979525    5812 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 03:40:00.981182    5812 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 03:40:00.981182    5812 cni.go:84] Creating CNI manager for ""
	I0603 03:40:00.981182    5812 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 03:40:00.981182    5812 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 03:40:00.981182    5812 start.go:340] cluster config:
	{Name:addons-402100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-402100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 03:40:00.982255    5812 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 03:40:00.986042    5812 out.go:177] * Starting "addons-402100" primary control-plane node in "addons-402100" cluster
	I0603 03:40:00.989862    5812 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 03:40:00.989862    5812 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 03:40:00.989862    5812 cache.go:56] Caching tarball of preloaded images
	I0603 03:40:00.990592    5812 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 03:40:00.990592    5812 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 03:40:00.991299    5812 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\config.json ...
	I0603 03:40:00.991299    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\config.json: {Name:mkff8adf05b700e486c20f348625635379fc97b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:40:00.991960    5812 start.go:360] acquireMachinesLock for addons-402100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 03:40:00.993076    5812 start.go:364] duration metric: took 1.1156ms to acquireMachinesLock for "addons-402100"
	I0603 03:40:00.993143    5812 start.go:93] Provisioning new machine with config: &{Name:addons-402100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-402100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 03:40:00.993143    5812 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 03:40:00.995455    5812 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 03:40:00.995455    5812 start.go:159] libmachine.API.Create for "addons-402100" (driver="hyperv")
	I0603 03:40:00.995455    5812 client.go:168] LocalClient.Create starting
	I0603 03:40:00.995455    5812 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 03:40:01.452233    5812 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 03:40:01.619324    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 03:40:03.607436    5812 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 03:40:03.607436    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:03.607709    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 03:40:05.221551    5812 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 03:40:05.221770    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:05.221770    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 03:40:06.581229    5812 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 03:40:06.589276    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:06.590149    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 03:40:10.179970    5812 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 03:40:10.179970    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:10.195265    5812 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 03:40:10.713160    5812 main.go:141] libmachine: Creating SSH key...
	I0603 03:40:10.918298    5812 main.go:141] libmachine: Creating VM...
	I0603 03:40:10.918298    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 03:40:13.674567    5812 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 03:40:13.674567    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:13.683647    5812 main.go:141] libmachine: Using switch "Default Switch"
	I0603 03:40:13.683746    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 03:40:15.360738    5812 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 03:40:15.369890    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:15.369890    5812 main.go:141] libmachine: Creating VHD
	I0603 03:40:15.369890    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 03:40:19.044486    5812 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4D78B633-4071-4113-AE46-A95B43FF559B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 03:40:19.044486    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:19.055279    5812 main.go:141] libmachine: Writing magic tar header
	I0603 03:40:19.055379    5812 main.go:141] libmachine: Writing SSH key tar header
	I0603 03:40:19.065524    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 03:40:22.104351    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:22.104351    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:22.114946    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\disk.vhd' -SizeBytes 20000MB
	I0603 03:40:24.562602    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:24.571557    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:24.571557    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-402100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0603 03:40:28.199115    5812 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-402100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 03:40:28.199115    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:28.199115    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-402100 -DynamicMemoryEnabled $false
	I0603 03:40:30.346754    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:30.346754    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:30.356198    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-402100 -Count 2
	I0603 03:40:32.390094    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:32.390094    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:32.390198    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-402100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\boot2docker.iso'
	I0603 03:40:34.835901    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:34.835901    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:34.836032    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-402100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\disk.vhd'
	I0603 03:40:37.389745    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:37.389745    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:37.389854    5812 main.go:141] libmachine: Starting VM...
	I0603 03:40:37.390028    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-402100
	I0603 03:40:40.446165    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:40.446165    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:40.446165    5812 main.go:141] libmachine: Waiting for host to start...
	I0603 03:40:40.446165    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:40:42.699394    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:40:42.699633    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:42.699713    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:40:45.199912    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:45.199912    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:46.211279    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:40:48.383041    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:40:48.388717    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:48.388717    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:40:50.840130    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:50.840223    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:51.856336    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:40:53.968280    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:40:53.968523    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:53.968631    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:40:56.410909    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:40:56.410909    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:57.422419    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:40:59.506563    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:40:59.507656    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:40:59.507656    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:01.880489    5812 main.go:141] libmachine: [stdout =====>] : 
	I0603 03:41:01.880489    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:02.880943    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:05.027936    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:05.027936    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:05.027936    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:07.452155    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:07.452155    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:07.452155    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:09.484887    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:09.484887    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:09.496153    5812 machine.go:94] provisionDockerMachine start ...
	I0603 03:41:09.496153    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:11.559580    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:11.559580    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:11.563396    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:14.002571    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:14.002571    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:14.018689    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:14.026943    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:14.026943    5812 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 03:41:14.150178    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 03:41:14.150289    5812 buildroot.go:166] provisioning hostname "addons-402100"
	I0603 03:41:14.150432    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:16.174475    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:16.184763    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:16.184763    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:18.585985    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:18.585985    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:18.603484    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:18.604149    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:18.604149    5812 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-402100 && echo "addons-402100" | sudo tee /etc/hostname
	I0603 03:41:18.747310    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-402100
	
	I0603 03:41:18.747395    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:20.756317    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:20.756317    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:20.767319    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:23.205932    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:23.205932    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:23.211213    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:23.211213    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:23.211213    5812 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-402100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-402100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-402100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 03:41:23.347968    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 03:41:23.348083    5812 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 03:41:23.348159    5812 buildroot.go:174] setting up certificates
	I0603 03:41:23.348201    5812 provision.go:84] configureAuth start
	I0603 03:41:23.348281    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:25.344509    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:25.355115    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:25.355180    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:27.771651    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:27.771651    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:27.771651    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:29.751302    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:29.751302    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:29.751425    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:32.069281    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:32.069281    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:32.081667    5812 provision.go:143] copyHostCerts
	I0603 03:41:32.082565    5812 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 03:41:32.084215    5812 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 03:41:32.085629    5812 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 03:41:32.087386    5812 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-402100 san=[127.0.0.1 172.17.90.102 addons-402100 localhost minikube]
	I0603 03:41:32.513059    5812 provision.go:177] copyRemoteCerts
	I0603 03:41:32.523669    5812 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 03:41:32.523669    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:34.544940    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:34.544940    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:34.544940    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:36.889087    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:36.889087    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:36.899997    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:41:37.002086    5812 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4783823s)
	I0603 03:41:37.002086    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 03:41:37.044101    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 03:41:37.097304    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 03:41:37.147170    5812 provision.go:87] duration metric: took 13.798961s to configureAuth
	I0603 03:41:37.147170    5812 buildroot.go:189] setting minikube options for container-runtime
	I0603 03:41:37.147958    5812 config.go:182] Loaded profile config "addons-402100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 03:41:37.147958    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:39.184790    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:39.184790    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:39.194395    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:41.556072    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:41.556072    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:41.561567    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:41.561684    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:41.561684    5812 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 03:41:41.680853    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 03:41:41.681070    5812 buildroot.go:70] root file system type: tmpfs
	I0603 03:41:41.681442    5812 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 03:41:41.681588    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:43.645345    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:43.645345    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:43.655369    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:45.972772    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:45.972772    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:45.990004    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:45.990259    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:45.990259    5812 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 03:41:46.138742    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 03:41:46.138742    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:48.155771    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:48.155771    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:48.155934    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:50.500927    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:50.511000    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:50.517225    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:41:50.517758    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:41:50.517758    5812 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 03:41:52.536181    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 03:41:52.536181    5812 machine.go:97] duration metric: took 43.0400028s to provisionDockerMachine
	I0603 03:41:52.536181    5812 client.go:171] duration metric: took 1m51.5406654s to LocalClient.Create
	I0603 03:41:52.536807    5812 start.go:167] duration metric: took 1m51.5412663s to libmachine.API.Create "addons-402100"
	I0603 03:41:52.536909    5812 start.go:293] postStartSetup for "addons-402100" (driver="hyperv")
	I0603 03:41:52.536909    5812 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 03:41:52.549858    5812 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 03:41:52.549858    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:54.560752    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:54.571526    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:54.571526    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:41:56.930777    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:41:56.930777    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:56.942263    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:41:57.039285    5812 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4891745s)
	I0603 03:41:57.051054    5812 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 03:41:57.059317    5812 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 03:41:57.059427    5812 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 03:41:57.059943    5812 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 03:41:57.060180    5812 start.go:296] duration metric: took 4.523268s for postStartSetup
	I0603 03:41:57.063363    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:41:59.077857    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:41:59.087044    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:41:59.087159    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:01.411957    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:01.411957    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:01.422057    5812 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\config.json ...
	I0603 03:42:01.424993    5812 start.go:128] duration metric: took 2m0.4317828s to createHost
	I0603 03:42:01.424993    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:42:03.462376    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:42:03.462376    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:03.462376    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:05.880778    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:05.891121    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:05.896699    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:42:05.897287    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:42:05.897287    5812 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 03:42:06.015701    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717411326.005447493
	
	I0603 03:42:06.015860    5812 fix.go:216] guest clock: 1717411326.005447493
	I0603 03:42:06.015860    5812 fix.go:229] Guest: 2024-06-03 03:42:06.005447493 -0700 PDT Remote: 2024-06-03 03:42:01.4249931 -0700 PDT m=+125.828670801 (delta=4.580454393s)
	I0603 03:42:06.016019    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:42:08.005368    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:42:08.005368    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:08.005368    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:10.453236    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:10.453236    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:10.458070    5812 main.go:141] libmachine: Using SSH client type: native
	I0603 03:42:10.458591    5812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.102 22 <nil> <nil>}
	I0603 03:42:10.458591    5812 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717411326
	I0603 03:42:10.591490    5812 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 10:42:06 UTC 2024
	
	I0603 03:42:10.591603    5812 fix.go:236] clock set: Mon Jun  3 10:42:06 UTC 2024
	 (err=<nil>)
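
The clock-fix step above reads the guest clock with date +%s.%N, compares it to the host (delta=4.580454393s in this run), and resets the guest with sudo date -s @<seconds>. A sketch of that logic with the SSH runner abstracted as a function parameter (assumption: a one-second drift threshold; minikube's actual threshold may differ):

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func syncGuestClock(runSSH func(cmd string) (string, error)) error {
        out, err := runSSH("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        if delta := guest.Sub(time.Now()); delta < -time.Second || delta > time.Second {
            // Force the guest to the host's current time, as in the log above.
            _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }
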
	I0603 03:42:10.591603    5812 start.go:83] releasing machines lock for "addons-402100", held for 2m9.5983877s
	I0603 03:42:10.591950    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:42:12.644057    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:42:12.644057    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:12.655213    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:15.083090    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:15.093284    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:15.098651    5812 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 03:42:15.099424    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:42:15.113942    5812 ssh_runner.go:195] Run: cat /version.json
	I0603 03:42:15.113942    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:42:17.216529    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:42:17.216804    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:17.216529    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:42:17.216932    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:17.216804    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:17.216932    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:42:19.786292    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:19.786292    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:19.786651    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:42:19.807710    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:42:19.807710    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:42:19.807710    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:42:19.940077    5812 ssh_runner.go:235] Completed: cat /version.json: (4.8253026s)
	I0603 03:42:19.940077    5812 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.841423s)
	I0603 03:42:19.954519    5812 ssh_runner.go:195] Run: systemctl --version
	I0603 03:42:19.967878    5812 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 03:42:19.982280    5812 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 03:42:19.993237    5812 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 03:42:20.021404    5812 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 03:42:20.021541    5812 start.go:494] detecting cgroup driver to use...
	I0603 03:42:20.021700    5812 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 03:42:20.064221    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 03:42:20.094940    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 03:42:20.113669    5812 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 03:42:20.124969    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 03:42:20.154981    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 03:42:20.185033    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 03:42:20.215394    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 03:42:20.246203    5812 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 03:42:20.278999    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 03:42:20.309941    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 03:42:20.337758    5812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 03:42:20.369129    5812 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 03:42:20.403872    5812 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 03:42:20.432338    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:20.615858    5812 ssh_runner.go:195] Run: sudo systemctl restart containerd
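
The sed pipeline above (03:42:20.06 through 03:42:20.37) rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver. A local Go sketch of the key rewrite, assuming a plain regexp replacement is a faithful stand-in for the sed call at 03:42:20.124969:

    package sketch

    import (
        "os"
        "regexp"
    )

    var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func useCgroupfs(path string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        b = systemdCgroup.ReplaceAll(b, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, b, 0o644)
    }
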
	I0603 03:42:20.640013    5812 start.go:494] detecting cgroup driver to use...
	I0603 03:42:20.658477    5812 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 03:42:20.688724    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 03:42:20.724879    5812 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 03:42:20.767410    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 03:42:20.803807    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 03:42:20.841755    5812 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 03:42:20.902427    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 03:42:20.925500    5812 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 03:42:20.975299    5812 ssh_runner.go:195] Run: which cri-dockerd
	I0603 03:42:20.991296    5812 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 03:42:21.009365    5812 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
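
"scp memory --> <path>" above copies an in-memory buffer, not a local file, to the guest. A sketch of that pattern with golang.org/x/crypto/ssh, assuming that piping stdin into sudo tee approximates the runner's behavior:

    package sketch

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // scpMemory writes data to dest on the remote host with root privileges.
    func scpMemory(client *ssh.Client, data []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee consumes stdin and writes it to the destination path.
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
    }
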
	I0603 03:42:21.049813    5812 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 03:42:21.235955    5812 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 03:42:21.404366    5812 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 03:42:21.404637    5812 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 03:42:21.447775    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:21.612429    5812 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 03:42:24.086810    5812 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4743795s)
	I0603 03:42:24.100935    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 03:42:24.134635    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 03:42:24.168044    5812 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 03:42:24.352799    5812 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 03:42:24.528204    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:24.718559    5812 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 03:42:24.761599    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 03:42:24.795654    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:24.984180    5812 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 03:42:25.095374    5812 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 03:42:25.108203    5812 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 03:42:25.117192    5812 start.go:562] Will wait 60s for crictl version
	I0603 03:42:25.128609    5812 ssh_runner.go:195] Run: which crictl
	I0603 03:42:25.144735    5812 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 03:42:25.209138    5812 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
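
"Will wait 60s for socket path /var/run/cri-dockerd.sock" (and the 60s crictl wait after it) is a stat-until-timeout poll. A stdlib sketch of the same wait, run locally here rather than over SSH as minikube does:

    package sketch

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // the socket (or file) exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }
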
	I0603 03:42:25.220068    5812 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 03:42:25.264602    5812 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 03:42:25.299179    5812 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 03:42:25.299441    5812 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 03:42:25.304042    5812 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 03:42:25.304042    5812 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 03:42:25.304042    5812 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 03:42:25.304042    5812 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 03:42:25.306573    5812 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 03:42:25.306573    5812 ip.go:210] interface addr: 172.17.80.1/20
	I0603 03:42:25.319690    5812 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 03:42:25.325737    5812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
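
The bash one-liner above rewrites /etc/hosts so exactly one host.minikube.internal entry remains: grep -v drops any stale line, echo appends the fresh one, and the temp file is copied back. The same logic in Go (local sketch; the real command runs on the guest):

    package sketch

    import (
        "os"
        "strings"
    )

    func setHostsEntry(path, ip, host string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
            // Drop any stale entry for this hostname, mirroring grep -v.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            keep = append(keep, line)
        }
        keep = append(keep, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }
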
	I0603 03:42:25.347060    5812 kubeadm.go:877] updating cluster {Name:addons-402100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-402100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.102 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 03:42:25.347060    5812 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 03:42:25.361423    5812 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 03:42:25.382703    5812 docker.go:685] Got preloaded images: 
	I0603 03:42:25.382703    5812 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 03:42:25.393299    5812 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 03:42:25.422752    5812 ssh_runner.go:195] Run: which lz4
	I0603 03:42:25.441465    5812 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 03:42:25.447906    5812 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 03:42:25.448000    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 03:42:27.075159    5812 docker.go:649] duration metric: took 1.6460935s to copy over tarball
	I0603 03:42:27.086387    5812 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 03:42:32.698169    5812 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.6116819s)
	I0603 03:42:32.698279    5812 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 03:42:32.760276    5812 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 03:42:32.782636    5812 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 03:42:32.826402    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:33.004765    5812 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 03:42:38.727384    5812 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7226158s)
	I0603 03:42:38.737815    5812 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 03:42:38.758067    5812 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 03:42:38.758067    5812 cache_images.go:84] Images are preloaded, skipping loading
	I0603 03:42:38.758067    5812 kubeadm.go:928] updating node { 172.17.90.102 8443 v1.30.1 docker true true} ...
	I0603 03:42:38.759361    5812 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-402100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.90.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-402100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 03:42:38.768481    5812 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 03:42:38.798056    5812 cni.go:84] Creating CNI manager for ""
	I0603 03:42:38.798127    5812 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 03:42:38.798127    5812 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 03:42:38.798207    5812 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.90.102 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-402100 NodeName:addons-402100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.90.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.90.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 03:42:38.798566    5812 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.90.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-402100"
	  kubeletExtraArgs:
	    node-ip: 172.17.90.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.90.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 03:42:38.812176    5812 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 03:42:38.829065    5812 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 03:42:38.840283    5812 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 03:42:38.856973    5812 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 03:42:38.890345    5812 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 03:42:38.913317    5812 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0603 03:42:38.958001    5812 ssh_runner.go:195] Run: grep 172.17.90.102	control-plane.minikube.internal$ /etc/hosts
	I0603 03:42:38.962380    5812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.90.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 03:42:39.000020    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:42:39.180499    5812 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 03:42:39.217944    5812 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100 for IP: 172.17.90.102
	I0603 03:42:39.218059    5812 certs.go:194] generating shared ca certs ...
	I0603 03:42:39.218103    5812 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.218695    5812 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 03:42:39.397820    5812 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0603 03:42:39.397820    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.404108    5812 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0603 03:42:39.404108    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.406238    5812 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 03:42:39.687717    5812 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0603 03:42:39.687717    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.689194    5812 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0603 03:42:39.689194    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
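
The minikubeCA and proxyClientCA generation above (certs.go/crypto.go) follows the standard crypto/x509 self-signed-CA flow. A minimal sketch, not minikube's exact parameters or file handling:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // newCA returns a DER-encoded self-signed CA certificate and its key.
    func newCA() ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: template and parent are the same certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        return der, key, err
    }
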
	I0603 03:42:39.690939    5812 certs.go:256] generating profile certs ...
	I0603 03:42:39.691530    5812 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.key
	I0603 03:42:39.691530    5812 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt with IP's: []
	I0603 03:42:39.780327    5812 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt ...
	I0603 03:42:39.780327    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: {Name:mk4dd78d05282e414a8b2832c68521d9aca046a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.781445    5812 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.key ...
	I0603 03:42:39.781445    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.key: {Name:mkf4d3d8f0213455632e9cda0596111f11add5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:39.782307    5812 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key.59a0d170
	I0603 03:42:39.783499    5812 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt.59a0d170 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.90.102]
	I0603 03:42:40.008485    5812 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt.59a0d170 ...
	I0603 03:42:40.008485    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt.59a0d170: {Name:mkb06b6471bd35ced33e98d63cdfb16d576e5fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:40.016934    5812 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key.59a0d170 ...
	I0603 03:42:40.016934    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key.59a0d170: {Name:mkc1757ca66279d79bf15d2f2ee0cda4ec1eb894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:40.018569    5812 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt.59a0d170 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt
	I0603 03:42:40.029401    5812 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key.59a0d170 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key
	I0603 03:42:40.031275    5812 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.key
	I0603 03:42:40.031275    5812 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.crt with IP's: []
	I0603 03:42:40.433107    5812 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.crt ...
	I0603 03:42:40.433107    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.crt: {Name:mk2877284accefb56c20ee0bf75840f66ea3074d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:40.434811    5812 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.key ...
	I0603 03:42:40.434811    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.key: {Name:mk86c244ce6d3edf82c6e6b69652f0df3da24a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:42:40.443639    5812 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 03:42:40.447167    5812 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 03:42:40.447332    5812 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 03:42:40.447332    5812 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 03:42:40.448110    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 03:42:40.495965    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 03:42:40.541094    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 03:42:40.582591    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 03:42:40.624191    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 03:42:40.665547    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 03:42:40.705047    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 03:42:40.738471    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 03:42:40.779836    5812 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 03:42:40.821479    5812 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 03:42:40.864104    5812 ssh_runner.go:195] Run: openssl version
	I0603 03:42:40.887118    5812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 03:42:40.917890    5812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 03:42:40.924583    5812 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 03:42:40.935113    5812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 03:42:40.958987    5812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 03:42:40.989286    5812 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 03:42:40.992160    5812 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 03:42:40.992160    5812 kubeadm.go:391] StartCluster: {Name:addons-402100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-402100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.102 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 03:42:40.997930    5812 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 03:42:41.035497    5812 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 03:42:41.061077    5812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 03:42:41.090438    5812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 03:42:41.093848    5812 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 03:42:41.106910    5812 kubeadm.go:156] found existing configuration files:
	
	I0603 03:42:41.121419    5812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 03:42:41.128483    5812 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 03:42:41.146618    5812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 03:42:41.175732    5812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 03:42:41.178614    5812 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 03:42:41.203245    5812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 03:42:41.231130    5812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 03:42:41.247446    5812 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 03:42:41.258054    5812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 03:42:41.284374    5812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 03:42:41.293322    5812 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 03:42:41.314230    5812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 03:42:41.317076    5812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 03:42:41.578065    5812 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 03:42:54.516308    5812 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 03:42:54.516556    5812 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 03:42:54.516848    5812 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 03:42:54.517165    5812 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 03:42:54.517470    5812 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 03:42:54.517541    5812 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 03:42:54.520827    5812 out.go:204]   - Generating certificates and keys ...
	I0603 03:42:54.521082    5812 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 03:42:54.521385    5812 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 03:42:54.521385    5812 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 03:42:54.521385    5812 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 03:42:54.521917    5812 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 03:42:54.522049    5812 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 03:42:54.522049    5812 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 03:42:54.522049    5812 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-402100 localhost] and IPs [172.17.90.102 127.0.0.1 ::1]
	I0603 03:42:54.522049    5812 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 03:42:54.523019    5812 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-402100 localhost] and IPs [172.17.90.102 127.0.0.1 ::1]
	I0603 03:42:54.523019    5812 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 03:42:54.523019    5812 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 03:42:54.523019    5812 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 03:42:54.523558    5812 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 03:42:54.523830    5812 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 03:42:54.523857    5812 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 03:42:54.523857    5812 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 03:42:54.523857    5812 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 03:42:54.523857    5812 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 03:42:54.524415    5812 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 03:42:54.524415    5812 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 03:42:54.528076    5812 out.go:204]   - Booting up control plane ...
	I0603 03:42:54.528248    5812 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 03:42:54.528248    5812 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 03:42:54.528248    5812 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 03:42:54.528922    5812 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 03:42:54.528922    5812 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 03:42:54.528922    5812 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 03:42:54.528922    5812 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 03:42:54.528922    5812 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 03:42:54.530119    5812 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001299352s
	I0603 03:42:54.530247    5812 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 03:42:54.530247    5812 kubeadm.go:309] [api-check] The API server is healthy after 6.501995682s
	I0603 03:42:54.530247    5812 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 03:42:54.530247    5812 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 03:42:54.530247    5812 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 03:42:54.530247    5812 kubeadm.go:309] [mark-control-plane] Marking the node addons-402100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 03:42:54.531830    5812 kubeadm.go:309] [bootstrap-token] Using token: gok7rh.oyvndix34psvd4o9
	I0603 03:42:54.534022    5812 out.go:204]   - Configuring RBAC rules ...
	I0603 03:42:54.534022    5812 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 03:42:54.534597    5812 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 03:42:54.534730    5812 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 03:42:54.534730    5812 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 03:42:54.534730    5812 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 03:42:54.534730    5812 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 03:42:54.534730    5812 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 03:42:54.534730    5812 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 03:42:54.534730    5812 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 03:42:54.534730    5812 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 03:42:54.534730    5812 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 03:42:54.534730    5812 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 03:42:54.534730    5812 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 03:42:54.534730    5812 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gok7rh.oyvndix34psvd4o9 \
	I0603 03:42:54.534730    5812 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 \
	I0603 03:42:54.534730    5812 kubeadm.go:309] 	--control-plane 
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 03:42:54.534730    5812 kubeadm.go:309] 
	I0603 03:42:54.534730    5812 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gok7rh.oyvndix34psvd4o9 \
	I0603 03:42:54.534730    5812 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
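
The --discovery-token-ca-cert-hash values above use kubeadm's documented format: "sha256:" plus the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes the hash from the cluster's ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        b, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(b)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the raw DER SubjectPublicKeyInfo, as kubeadm does.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
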
	I0603 03:42:54.534730    5812 cni.go:84] Creating CNI manager for ""
	I0603 03:42:54.534730    5812 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 03:42:54.542230    5812 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 03:42:54.557812    5812 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 03:42:54.577782    5812 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 03:42:54.610048    5812 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 03:42:54.624706    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:54.625913    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-402100 minikube.k8s.io/updated_at=2024_06_03T03_42_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=addons-402100 minikube.k8s.io/primary=true
	I0603 03:42:54.628965    5812 ops.go:34] apiserver oom_adj: -16
	I0603 03:42:54.772795    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:55.273659    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:55.773997    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:56.271417    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:56.777357    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:57.283812    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:57.783510    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:58.282698    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:58.775268    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:59.282673    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:42:59.771942    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:00.276086    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:00.775901    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:01.275203    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:01.776845    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:02.269488    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:02.772965    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:03.274821    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:03.775815    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:04.274703    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:04.791545    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:05.273890    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:05.774987    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:06.273218    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:06.781922    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:07.285383    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:07.795138    5812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 03:43:07.889730    5812 kubeadm.go:1107] duration metric: took 13.2794137s to wait for elevateKubeSystemPrivileges
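(Annotation: the run of identical "kubectl get sa default" commands above is a ~500ms poll — minikube retries until the cluster's default service account exists before proceeding, which is the 13.28s elevateKubeSystemPrivileges wait reported here. A minimal Go sketch of that retry pattern follows; the function name and timeout are illustrative, not minikube's actual helper.)

    // Sketch of the poll-until-ready pattern visible in the log above:
    // re-run a probe on a fixed interval until it succeeds or times out.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Probe: does the "default" service account exist yet?
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for default service account")
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }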
	W0603 03:43:07.889853    5812 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 03:43:07.889853    5812 kubeadm.go:393] duration metric: took 26.8976745s to StartCluster
	I0603 03:43:07.889887    5812 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:43:07.890196    5812 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 03:43:07.891088    5812 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:43:07.893156    5812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 03:43:07.893535    5812 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.90.102 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 03:43:07.897913    5812 out.go:177] * Verifying Kubernetes components...
	I0603 03:43:07.893870    5812 config.go:182] Loaded profile config "addons-402100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 03:43:07.893577    5812 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0603 03:43:07.903781    5812 addons.go:69] Setting yakd=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting metrics-server=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting helm-tiller=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon yakd=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting ingress=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon helm-tiller=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon ingress=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting ingress-dns=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon ingress-dns=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting cloud-spanner=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon cloud-spanner=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting gcp-auth=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting volcano=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon volcano=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting default-storageclass=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 mustload.go:65] Loading cluster: addons-402100
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon metrics-server=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-402100"
	I0603 03:43:07.904337    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.904337    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons.go:69] Setting registry=true in profile "addons-402100"
	I0603 03:43:07.904598    5812 addons.go:234] Setting addon registry=true in "addons-402100"
	I0603 03:43:07.904479    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.904598    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons.go:69] Setting inspektor-gadget=true in profile "addons-402100"
	I0603 03:43:07.904598    5812 addons.go:234] Setting addon inspektor-gadget=true in "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting storage-provisioner=true in profile "addons-402100"
	I0603 03:43:07.903781    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-402100"
	I0603 03:43:07.903781    5812 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-402100"
	I0603 03:43:07.905469    5812 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-402100"
	I0603 03:43:07.905525    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 addons.go:69] Setting volumesnapshots=true in profile "addons-402100"
	I0603 03:43:07.906083    5812 addons.go:234] Setting addon volumesnapshots=true in "addons-402100"
	I0603 03:43:07.906276    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.903781    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.904598    5812 config.go:182] Loaded profile config "addons-402100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 03:43:07.905142    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.907618    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.905142    5812 addons.go:234] Setting addon storage-provisioner=true in "addons-402100"
	I0603 03:43:07.907618    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:07.909054    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.909734    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.910811    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.911732    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.911884    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.912112    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.912776    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.913295    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.913424    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.913659    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.913794    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.913999    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.914442    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.914483    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:07.914518    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
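(Annotation: each [executing ==>] line above, with its later [stdout =====>] / [stderr =====>] pair, is the Hyper-V driver shelling out to powershell.exe and capturing the result. A self-contained Go sketch of that round-trip using os/exec follows; it approximates what libmachine's hyperv driver does but is not its exact code.)

    // Sketch: query a Hyper-V VM's state by invoking PowerShell and
    // capturing stdout/stderr, mirroring the log lines above.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func vmState(name string) (string, error) {
        script := fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            return "", fmt.Errorf("powershell: %v: %s", err, stderr.String())
        }
        return strings.TrimSpace(stdout.String()), nil // e.g. "Running"
    }

    func main() {
        state, err := vmState("addons-402100")
        fmt.Println(state, err)
    }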
	I0603 03:43:07.924617    5812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 03:43:08.752840    5812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 03:43:08.898692    5812 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 03:43:10.819376    5812 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066535s)
	I0603 03:43:10.819376    5812 start.go:946] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
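(Annotation: reconstructed from the sed pipeline completed above — it inserts a hosts block before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors" — the patched region of the CoreDNS Corefile looks roughly like this, with unrelated plugins elided:)

    .:53 {
        log
        errors
        ...
        hosts {
           172.17.80.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }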
	I0603 03:43:10.819376    5812 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9206826s)
	I0603 03:43:10.819376    5812 node_ready.go:35] waiting up to 6m0s for node "addons-402100" to be "Ready" ...
	I0603 03:43:11.194473    5812 node_ready.go:49] node "addons-402100" has status "Ready":"True"
	I0603 03:43:11.194473    5812 node_ready.go:38] duration metric: took 375.0963ms for node "addons-402100" to be "Ready" ...
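(Annotation: the node_ready wait above checks the node's Ready condition until it reports True. The same check can be expressed with the standard client-go API; the following is a sketch under that assumption, not minikube's internal node_ready helper.)

    // Sketch: poll a node's Ready condition via client-go until it is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if ok, err := nodeReady(cs, "addons-402100"); err == nil && ok {
                fmt.Println("node addons-402100 is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }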
	I0603 03:43:11.194473    5812 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 03:43:11.685238    5812 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:12.973315    5812 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-402100" context rescaled to 1 replicas
	I0603 03:43:13.687606    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:14.209794    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.209794    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.210390    5812 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-402100"
	I0603 03:43:14.214029    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:14.214029    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.665437    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.665437    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.673284    5812 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0603 03:43:14.679629    5812 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0603 03:43:14.683789    5812 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0603 03:43:14.696472    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.696472    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.703013    5812 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 03:43:14.720818    5812 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 03:43:14.720818    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 03:43:14.720818    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.720818    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.720818    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.730358    5812 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 03:43:14.738700    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.738700    5812 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 03:43:14.740040    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.740112    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 03:43:14.740112    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.741766    5812 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 03:43:14.747331    5812 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 03:43:14.747331    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 03:43:14.745384    5812 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0603 03:43:14.747331    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0603 03:43:14.747331    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.748880    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.773718    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.773718    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.781275    5812 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0603 03:43:14.784434    5812 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 03:43:14.784434    5812 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 03:43:14.784434    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.787276    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.787836    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.793757    5812 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 03:43:14.845531    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.850570    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.853910    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 03:43:14.855751    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 03:43:14.861788    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 03:43:14.876556    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 03:43:14.891343    5812 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 03:43:14.899231    5812 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 03:43:14.899231    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 03:43:14.899231    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.899231    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 03:43:14.919661    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 03:43:14.951158    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 03:43:14.951158    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 03:43:14.972613    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 03:43:14.972663    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 03:43:14.972900    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:14.976141    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:14.976141    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:14.980495    5812 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 03:43:14.980495    5812 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 03:43:14.980495    5812 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 03:43:14.980495    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:15.070738    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.070738    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.102615    5812 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 03:43:15.124680    5812 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 03:43:15.124680    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 03:43:15.124680    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:15.152170    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.152170    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.160065    5812 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 03:43:15.163351    5812 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 03:43:15.163351    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 03:43:15.163351    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:15.180067    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.180067    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.183473    5812 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 03:43:15.193148    5812 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 03:43:15.206852    5812 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 03:43:15.229863    5812 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 03:43:15.229863    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 03:43:15.229863    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:15.254352    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.254352    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.269917    5812 addons.go:234] Setting addon default-storageclass=true in "addons-402100"
	I0603 03:43:15.269917    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:15.271657    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:15.739707    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.756593    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.780934    5812 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 03:43:15.800563    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:15.887361    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:15.887361    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:15.887361    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:17.788317    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:17.788317    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:17.788317    5812 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 03:43:17.798972    5812 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 03:43:17.798972    5812 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 03:43:17.798972    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:17.904163    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:18.588211    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 03:43:18.588211    5812 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 03:43:18.588211    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:19.961459    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:20.685335    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:20.685335    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:20.692436    5812 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 03:43:20.716128    5812 out.go:177]   - Using image docker.io/busybox:stable
	I0603 03:43:20.717509    5812 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 03:43:20.717509    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 03:43:20.717509    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:20.977812    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:20.977812    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:20.978237    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.127675    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.142614    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.142614    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.176610    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.176610    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.176610    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.181631    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.181631    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.181631    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.210070    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.210070    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.210070    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.321482    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.322609    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.322676    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.345799    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.345799    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.345799    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.578296    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.578296    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.578296    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.594143    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.594143    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.594143    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:21.997205    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:21.997205    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:21.997205    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:22.049930    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:22.049930    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:22.049930    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:22.084731    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:22.520725    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:22.520725    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:22.521186    5812 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 03:43:22.521186    5812 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 03:43:22.521186    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:24.353316    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:24.561141    5812 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 03:43:24.561698    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:26.122839    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:26.122839    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:26.122839    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:26.568260    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:26.568260    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:26.568260    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:26.727332    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:27.106110    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:27.106110    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:27.106110    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:28.561058    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.561058    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.561058    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
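(Annotation: each sshutil "new ssh client" line above constructs a key-authenticated SSH connection to the VM from the IP, port, key path, and username shown. A self-contained sketch using the standard golang.org/x/crypto/ssh package follows; minikube's own wrapper differs in details such as host-key handling.)

    // Sketch: dial an SSH connection with private-key auth, as sshutil does.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func dial(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Test VMs are ephemeral, so host keys are not pinned here.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        c, err := dial("172.17.90.102", 22,
            `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa`,
            "docker")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer c.Close()
        fmt.Println("ssh connection established")
    }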
	I0603 03:43:28.608629    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.608675    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.608675    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.673435    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.673435    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.673435    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.714982    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.715565    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.715835    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.794072    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.794072    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.794072    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.878258    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.878529    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.878798    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.930808    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:28.930868    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:28.931221    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:28.935819    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 03:43:28.935881    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 03:43:28.956555    5812 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 03:43:28.956626    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 03:43:29.015206    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:29.015206    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.015571    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:29.096982    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:29.097584    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.097638    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:29.110097    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 03:43:29.110097    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 03:43:29.148883    5812 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 03:43:29.148883    5812 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 03:43:29.153870    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:29.153915    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.154177    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:29.191222    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:29.191508    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.191508    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:29.211127    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:29.246649    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 03:43:29.288456    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 03:43:29.318530    5812 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 03:43:29.318530    5812 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 03:43:29.341841    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0603 03:43:29.397260    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 03:43:29.397260    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 03:43:29.405114    5812 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 03:43:29.405114    5812 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 03:43:29.477069    5812 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 03:43:29.477069    5812 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 03:43:29.482684    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 03:43:29.619891    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 03:43:29.619970    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 03:43:29.662108    5812 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 03:43:29.662190    5812 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 03:43:29.697712    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:29.698047    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.698047    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:29.750747    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 03:43:29.777481    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 03:43:29.783650    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:29.783650    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:29.784407    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:29.815914    5812 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 03:43:29.815914    5812 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 03:43:29.858492    5812 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 03:43:29.858492    5812 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 03:43:29.966425    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 03:43:29.966492    5812 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 03:43:30.016613    5812 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 03:43:30.016677    5812 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 03:43:30.103459    5812 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 03:43:30.103520    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 03:43:30.236067    5812 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 03:43:30.236067    5812 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 03:43:30.256380    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 03:43:30.256440    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 03:43:30.258917    5812 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 03:43:30.258917    5812 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 03:43:30.346107    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 03:43:30.423977    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 03:43:30.485501    5812 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 03:43:30.485501    5812 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 03:43:30.510973    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 03:43:30.511080    5812 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 03:43:30.631004    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 03:43:30.777843    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:30.777843    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:30.778153    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:30.886519    5812 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 03:43:30.886519    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 03:43:30.918408    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.6717577s)
	I0603 03:43:30.922603    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 03:43:30.922669    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 03:43:31.138897    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:31.138897    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:31.146027    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:31.217029    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:31.315479    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:31.315479    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:31.318633    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:31.328481    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 03:43:31.328481    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 03:43:31.389796    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 03:43:31.668854    5812 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 03:43:31.668952    5812 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 03:43:31.781648    5812 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 03:43:31.781796    5812 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 03:43:32.280173    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:32.280173    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:32.284897    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:32.311035    5812 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 03:43:32.311067    5812 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 03:43:32.382318    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 03:43:32.418964    5812 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 03:43:32.418964    5812 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 03:43:32.541804    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:32.541868    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:32.541899    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:32.551490    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 03:43:32.772990    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.4844749s)
	I0603 03:43:32.805513    5812 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 03:43:32.805513    5812 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 03:43:32.833922    5812 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 03:43:32.833922    5812 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 03:43:33.087551    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 03:43:33.236323    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:33.316873    5812 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 03:43:33.317015    5812 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 03:43:33.400367    5812 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 03:43:33.400367    5812 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 03:43:33.457932    5812 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 03:43:33.835246    5812 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 03:43:33.835324    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 03:43:33.953350    5812 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 03:43:33.953416    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 03:43:34.043629    5812 addons.go:234] Setting addon gcp-auth=true in "addons-402100"
	I0603 03:43:34.043629    5812 host.go:66] Checking if "addons-402100" exists ...
	I0603 03:43:34.057351    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:34.884002    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 03:43:35.010824    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 03:43:36.397490    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:36.399394    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:36.412404    5812 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 03:43:36.412404    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-402100 ).state
	I0603 03:43:36.440481    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:38.787914    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:38.864666    5812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 03:43:38.864666    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:38.872948    5812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-402100 ).networkadapters[0]).ipaddresses[0]
	I0603 03:43:41.200829    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:41.679366    5812 main.go:141] libmachine: [stdout =====>] : 172.17.90.102
	
	I0603 03:43:41.679366    5812 main.go:141] libmachine: [stderr =====>] : 
	I0603 03:43:41.687556    5812 sshutil.go:53] new ssh client: &{IP:172.17.90.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-402100\id_rsa Username:docker}
	I0603 03:43:43.461748    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:45.717092    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (16.5796923s)
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (16.4389628s)
	I0603 03:43:45.921658    5812 addons.go:475] Verifying addon ingress=true in "addons-402100"
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (16.1708998s)
	I0603 03:43:45.924949    5812 out.go:177] * Verifying ingress addon...
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (16.1441659s)
	I0603 03:43:45.925032    5812 addons.go:475] Verifying addon metrics-server=true in "addons-402100"
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (15.4976002s)
	I0603 03:43:45.929683    5812 addons.go:475] Verifying addon registry=true in "addons-402100"
	I0603 03:43:45.922273    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.2912575s)
	I0603 03:43:45.922414    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (14.5326076s)
	I0603 03:43:45.922489    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.5401617s)
	I0603 03:43:45.921658    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (15.5755399s)
	I0603 03:43:45.932203    5812 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 03:43:45.934897    5812 out.go:177] * Verifying registry addon...
	I0603 03:43:45.937798    5812 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 03:43:45.957866    5812 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 03:43:45.958304    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:46.008220    5812 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 03:43:46.008220    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
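
The kapi.go lines that dominate the rest of this log are one poll loop per addon: list the pods matching a label selector, then keep printing "waiting for pod ..., current state: Pending" until every match reports the Ready condition. A hedged client-go sketch of that loop (the kubeconfig path, namespace/selector pair, and poll interval are assumptions taken from the surrounding log lines):

    // Sketch: poll pods by label selector until all are Ready, as kapi.go does.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil // still "waiting for pod ..., current state: Pending"
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ok, _ := podsReady(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
            if ok {
                fmt.Println("registry pods Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
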
	I0603 03:43:46.462868    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:46.482632    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:46.926347    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.3748474s)
	I0603 03:43:46.926347    5812 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-402100"
	I0603 03:43:46.926347    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.0416496s)
	I0603 03:43:46.926347    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.8387867s)
	I0603 03:43:46.930089    5812 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-402100 service yakd-dashboard -n yakd-dashboard
	
	I0603 03:43:46.934069    5812 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 03:43:46.926347    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.9155153s)
	I0603 03:43:46.927037    5812 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.5145535s)
	W0603 03:43:46.938537    5812 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 03:43:46.947911    5812 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 03:43:46.940195    5812 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 03:43:46.940195    5812 retry.go:31] will retry after 169.987883ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
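
This failure (and the retry scheduled 169ms later) is an ordering race inside a single apply batch: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass object, but the CRDs introducing that kind are created in the same kubectl apply, and the client's REST mapper has not yet seen them, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries; the second attempt at 03:43:47 below uses apply --force and completes about 2.7s later. A hedged sketch of one way to avoid the retry entirely, assuming the same manifest paths: apply the CRDs alone, block until they are Established, then apply the custom resources:

    // Sketch: sequence CRDs before the custom resources that use them.
    // Manifest paths and CRD names come from the failing batch above.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        // 1. CRDs first.
        run("apply",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")
        // 2. Block until the API server has established the new kinds.
        run("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io",
            "crd/volumesnapshotcontents.snapshot.storage.k8s.io",
            "crd/volumesnapshots.snapshot.storage.k8s.io")
        // 3. Now the VolumeSnapshotClass and controller can be applied safely.
        run("apply",
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
            "-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
    }
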
	I0603 03:43:46.958793    5812 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 03:43:46.959123    5812 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 03:43:46.959123    5812 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 03:43:46.999290    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:47.013892    5812 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 03:43:47.013958    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:47.019045    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:47.047329    5812 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 03:43:47.047329    5812 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 03:43:47.143147    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 03:43:47.165633    5812 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 03:43:47.165633    5812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 03:43:47.282130    5812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 03:43:47.456673    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:47.462546    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:47.464588    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:47.954288    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:47.955118    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:47.970263    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:48.207471    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:48.453754    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:48.456678    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:48.467805    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:48.947239    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:48.947239    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:48.963378    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:49.469746    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:49.469989    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:49.550322    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:49.839443    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.5572575s)
	I0603 03:43:49.839443    5812 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.6961808s)
	I0603 03:43:49.847777    5812 addons.go:475] Verifying addon gcp-auth=true in "addons-402100"
	I0603 03:43:49.886499    5812 out.go:177] * Verifying gcp-auth addon...
	I0603 03:43:49.900930    5812 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0603 03:43:49.921510    5812 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 03:43:49.960579    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:49.961870    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:49.964187    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:50.461918    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:50.462143    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:50.463557    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:50.772810    5812 pod_ready.go:102] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"False"
	I0603 03:43:50.963545    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:50.964427    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:50.970846    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:51.219876    5812 pod_ready.go:92] pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.219929    5812 pod_ready.go:81] duration metric: took 39.5346636s for pod "coredns-7db6d8ff4d-h2ptk" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.219929    5812 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qg9cj" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.224243    5812 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-qg9cj" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-qg9cj" not found
	I0603 03:43:51.224293    5812 pod_ready.go:81] duration metric: took 4.3633ms for pod "coredns-7db6d8ff4d-qg9cj" in "kube-system" namespace to be "Ready" ...
	E0603 03:43:51.224293    5812 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-qg9cj" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-qg9cj" not found
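
Note the skip semantics here: coredns-7db6d8ff4d-qg9cj was in the wait list but had already been replaced by the time pod_ready reached it, so the NotFound error is logged and treated as "skip" rather than as a failure. A sketch of that branch using client-go's typed error check (kubeconfig path assumed):

    // Sketch: treat a missing pod as skippable, as pod_ready.go does above.
    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name := "coredns-7db6d8ff4d-qg9cj" // the pod the log skips
        _, err = cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Printf("pod %q not found, skipping (replica was replaced)\n", name)
            return
        }
        if err != nil {
            fmt.Println("transient error, would retry:", err)
        }
    }
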
	I0603 03:43:51.224293    5812 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.233884    5812 pod_ready.go:92] pod "etcd-addons-402100" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.233884    5812 pod_ready.go:81] duration metric: took 9.5913ms for pod "etcd-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.233884    5812 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.244920    5812 pod_ready.go:92] pod "kube-apiserver-addons-402100" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.244920    5812 pod_ready.go:81] duration metric: took 11.0357ms for pod "kube-apiserver-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.244920    5812 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.258096    5812 pod_ready.go:92] pod "kube-controller-manager-addons-402100" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.258096    5812 pod_ready.go:81] duration metric: took 13.1768ms for pod "kube-controller-manager-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.258096    5812 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kxf2b" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.408582    5812 pod_ready.go:92] pod "kube-proxy-kxf2b" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.408582    5812 pod_ready.go:81] duration metric: took 150.4854ms for pod "kube-proxy-kxf2b" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.408582    5812 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.452426    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:51.452728    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:51.461478    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:51.803621    5812 pod_ready.go:92] pod "kube-scheduler-addons-402100" in "kube-system" namespace has status "Ready":"True"
	I0603 03:43:51.803621    5812 pod_ready.go:81] duration metric: took 395.039ms for pod "kube-scheduler-addons-402100" in "kube-system" namespace to be "Ready" ...
	I0603 03:43:51.803621    5812 pod_ready.go:38] duration metric: took 40.6091202s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 03:43:51.803621    5812 api_server.go:52] waiting for apiserver process to appear ...
	I0603 03:43:51.817273    5812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 03:43:51.844827    5812 api_server.go:72] duration metric: took 43.9510919s to wait for apiserver process to appear ...
	I0603 03:43:51.844928    5812 api_server.go:88] waiting for apiserver healthz status ...
	I0603 03:43:51.844928    5812 api_server.go:253] Checking apiserver healthz at https://172.17.90.102:8443/healthz ...
	I0603 03:43:51.865806    5812 api_server.go:279] https://172.17.90.102:8443/healthz returned 200:
	ok
	I0603 03:43:51.868385    5812 api_server.go:141] control plane version: v1.30.1
	I0603 03:43:51.868385    5812 api_server.go:131] duration metric: took 23.4574ms to wait for apiserver health ...
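
The healthz probe logged above is just an HTTPS GET against the apiserver that expects a 200 with body "ok"; the control-plane version is fetched afterwards. A minimal sketch; skipping TLS verification here is a simplification, since minikube actually trusts the cluster CA:

    // Sketch of the "Checking apiserver healthz" step. Endpoint from the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification
        }}
        resp, err := client.Get("https://172.17.90.102:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://172.17.90.102:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
    }
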
	I0603 03:43:51.868385    5812 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 03:43:51.961579    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:51.962504    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:51.971060    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:52.029784    5812 system_pods.go:59] 18 kube-system pods found
	I0603 03:43:52.029784    5812 system_pods.go:61] "coredns-7db6d8ff4d-h2ptk" [81fbdd3f-6d76-494a-a873-c8550d4bc33f] Running
	I0603 03:43:52.029784    5812 system_pods.go:61] "csi-hostpath-attacher-0" [f52f2eaa-9a48-47f0-82b3-9688f7368f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0603 03:43:52.029784    5812 system_pods.go:61] "csi-hostpath-resizer-0" [4b98b4bd-14cd-4625-99c4-607609f8853e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0603 03:43:52.030310    5812 system_pods.go:61] "csi-hostpathplugin-689hw" [55555f52-e143-4c7a-844e-f54c2df2ecbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0603 03:43:52.030310    5812 system_pods.go:61] "etcd-addons-402100" [9125dcd8-709a-48ce-98ce-251a5dbedbf3] Running
	I0603 03:43:52.030310    5812 system_pods.go:61] "kube-apiserver-addons-402100" [d37a870b-3ca4-46ad-b9d8-434029382739] Running
	I0603 03:43:52.030345    5812 system_pods.go:61] "kube-controller-manager-addons-402100" [31e5d46c-ffb1-459e-9c3c-3b8abe301247] Running
	I0603 03:43:52.030345    5812 system_pods.go:61] "kube-ingress-dns-minikube" [c57681b6-419d-4da8-8f32-0349f299b259] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0603 03:43:52.030345    5812 system_pods.go:61] "kube-proxy-kxf2b" [9cac235c-a93a-4307-a98f-af0d87205244] Running
	I0603 03:43:52.030345    5812 system_pods.go:61] "kube-scheduler-addons-402100" [ef8d0d29-7291-4df0-becf-b61b26412a4e] Running
	I0603 03:43:52.030345    5812 system_pods.go:61] "metrics-server-c59844bb4-wmghb" [12d052b6-5e05-4727-ad22-af68e7eac41f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 03:43:52.030345    5812 system_pods.go:61] "nvidia-device-plugin-daemonset-wq5gk" [d4389b52-e6e3-4329-b22e-44f72dbfe971] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0603 03:43:52.030345    5812 system_pods.go:61] "registry-nx4pc" [54d57b97-dec3-4312-88d3-311d92254848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0603 03:43:52.030435    5812 system_pods.go:61] "registry-proxy-5xsp7" [20944f26-2fcb-41fa-a385-e6259d737c86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0603 03:43:52.030461    5812 system_pods.go:61] "snapshot-controller-745499f584-ddh8n" [b9f45d2c-caa1-4d94-86aa-31276a5bb156] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 03:43:52.030461    5812 system_pods.go:61] "snapshot-controller-745499f584-fcwnb" [ba044c7c-c042-4b61-b3b6-4f66f4c0f284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 03:43:52.030489    5812 system_pods.go:61] "storage-provisioner" [d26417b4-2e7f-45f0-a5a5-b47866da072a] Running
	I0603 03:43:52.030489    5812 system_pods.go:61] "tiller-deploy-6677d64bcd-gjs64" [27e103ae-c8cb-4f7d-b6b7-e0e003b5f8cc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0603 03:43:52.030531    5812 system_pods.go:74] duration metric: took 162.1456ms to wait for pod list to return data ...
	I0603 03:43:52.030531    5812 default_sa.go:34] waiting for default service account to be created ...
	I0603 03:43:52.216265    5812 default_sa.go:45] found service account: "default"
	I0603 03:43:52.216336    5812 default_sa.go:55] duration metric: took 185.8055ms for default service account to be created ...
	I0603 03:43:52.216336    5812 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 03:43:52.433932    5812 system_pods.go:86] 18 kube-system pods found
	I0603 03:43:52.433932    5812 system_pods.go:89] "coredns-7db6d8ff4d-h2ptk" [81fbdd3f-6d76-494a-a873-c8550d4bc33f] Running
	I0603 03:43:52.434114    5812 system_pods.go:89] "csi-hostpath-attacher-0" [f52f2eaa-9a48-47f0-82b3-9688f7368f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0603 03:43:52.434114    5812 system_pods.go:89] "csi-hostpath-resizer-0" [4b98b4bd-14cd-4625-99c4-607609f8853e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0603 03:43:52.434114    5812 system_pods.go:89] "csi-hostpathplugin-689hw" [55555f52-e143-4c7a-844e-f54c2df2ecbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0603 03:43:52.434114    5812 system_pods.go:89] "etcd-addons-402100" [9125dcd8-709a-48ce-98ce-251a5dbedbf3] Running
	I0603 03:43:52.434114    5812 system_pods.go:89] "kube-apiserver-addons-402100" [d37a870b-3ca4-46ad-b9d8-434029382739] Running
	I0603 03:43:52.434114    5812 system_pods.go:89] "kube-controller-manager-addons-402100" [31e5d46c-ffb1-459e-9c3c-3b8abe301247] Running
	I0603 03:43:52.434207    5812 system_pods.go:89] "kube-ingress-dns-minikube" [c57681b6-419d-4da8-8f32-0349f299b259] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0603 03:43:52.434207    5812 system_pods.go:89] "kube-proxy-kxf2b" [9cac235c-a93a-4307-a98f-af0d87205244] Running
	I0603 03:43:52.434275    5812 system_pods.go:89] "kube-scheduler-addons-402100" [ef8d0d29-7291-4df0-becf-b61b26412a4e] Running
	I0603 03:43:52.434275    5812 system_pods.go:89] "metrics-server-c59844bb4-wmghb" [12d052b6-5e05-4727-ad22-af68e7eac41f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 03:43:52.434323    5812 system_pods.go:89] "nvidia-device-plugin-daemonset-wq5gk" [d4389b52-e6e3-4329-b22e-44f72dbfe971] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0603 03:43:52.434323    5812 system_pods.go:89] "registry-nx4pc" [54d57b97-dec3-4312-88d3-311d92254848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0603 03:43:52.434390    5812 system_pods.go:89] "registry-proxy-5xsp7" [20944f26-2fcb-41fa-a385-e6259d737c86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0603 03:43:52.434424    5812 system_pods.go:89] "snapshot-controller-745499f584-ddh8n" [b9f45d2c-caa1-4d94-86aa-31276a5bb156] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 03:43:52.434424    5812 system_pods.go:89] "snapshot-controller-745499f584-fcwnb" [ba044c7c-c042-4b61-b3b6-4f66f4c0f284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 03:43:52.434424    5812 system_pods.go:89] "storage-provisioner" [d26417b4-2e7f-45f0-a5a5-b47866da072a] Running
	I0603 03:43:52.434424    5812 system_pods.go:89] "tiller-deploy-6677d64bcd-gjs64" [27e103ae-c8cb-4f7d-b6b7-e0e003b5f8cc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0603 03:43:52.434424    5812 system_pods.go:126] duration metric: took 218.0876ms to wait for k8s-apps to be running ...
	I0603 03:43:52.434521    5812 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 03:43:52.447676    5812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 03:43:52.453064    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:52.455221    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:52.458466    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:52.483458    5812 system_svc.go:56] duration metric: took 48.9745ms WaitForService to wait for kubelet
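
The kubelet check at 03:43:52.447676 relies on exit status alone: systemctl is-active --quiet prints nothing and returns 0 only if the queried unit is active. A sketch mirroring the command from that Run line (the literal "service" argument is reproduced verbatim from the log; running locally rather than over SSH is a simplification):

    // Sketch: an exit-code-only service check, as in the log's Run line.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }
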
	I0603 03:43:52.483583    5812 kubeadm.go:576] duration metric: took 44.589795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 03:43:52.483583    5812 node_conditions.go:102] verifying NodePressure condition ...
	I0603 03:43:52.612757    5812 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 03:43:52.612878    5812 node_conditions.go:123] node cpu capacity is 2
	I0603 03:43:52.612958    5812 node_conditions.go:105] duration metric: took 129.3756ms to run NodePressure ...
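
The NodePressure step reads each node's capacity and conditions, which is where the ephemeral-storage and cpu figures above come from. A hedged client-go sketch of that read (kubeconfig path assumed; the condition loop is an illustration of a pressure check, not minikube's exact logic):

    // Sketch: report node capacity and flag any True non-Ready conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure/DiskPressure/PIDPressure should all be False.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node condition %s is True\n", c.Type)
                }
            }
        }
    }
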
	I0603 03:43:52.612989    5812 start.go:240] waiting for startup goroutines ...
	I0603 03:43:52.952875    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:52.953651    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:52.969633    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:53.457755    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:53.458340    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:53.459778    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:53.951830    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:53.953175    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:53.969529    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:54.459181    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:54.459181    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:54.467429    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:54.940535    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:54.946354    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:54.964859    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:55.456318    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:55.458307    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:55.462500    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:55.962551    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:55.962551    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:55.964019    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:56.452423    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:56.471451    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:56.471727    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:56.942457    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:56.948091    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:56.962830    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:57.443742    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:57.445373    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:57.459243    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:57.995274    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:57.996702    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:57.997976    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:58.448639    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:58.448639    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:58.456086    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:58.962263    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:58.962553    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:58.962642    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:59.450951    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:59.452674    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:59.466096    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:43:59.959143    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:43:59.965197    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:43:59.968187    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:01.705351    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:01.712980    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:01.714306    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:01.722372    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:01.722547    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:01.728859    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:01.961968    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:01.965590    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:01.974107    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:02.462650    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:02.464624    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:02.471751    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:02.983162    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:02.983367    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:02.983367    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:03.445098    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:03.449027    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:03.454682    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:03.951084    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:03.953653    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:03.959698    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:04.457530    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:04.460088    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:04.461204    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:04.950156    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:04.952145    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:04.957395    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:05.464267    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:05.465069    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:05.465147    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:05.945036    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:05.946674    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:05.962698    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:06.453457    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:06.456592    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:06.463260    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:06.943691    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:06.949767    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:06.959460    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:07.447587    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:07.448603    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:07.454586    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:07.972681    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:07.972681    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:07.982321    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:08.449568    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:08.454477    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:08.463129    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:08.953791    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:08.956363    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:08.959653    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:09.445002    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:09.445178    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:09.463979    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:09.954699    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:09.958076    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:09.962174    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:10.445930    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:10.446115    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:10.460068    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:10.960579    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:10.966592    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:10.970063    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:11.447875    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:11.447985    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:11.464353    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:11.952371    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:11.952371    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:11.958537    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:12.446249    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:12.446963    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:12.463194    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:12.953956    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:12.958706    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:12.967047    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:13.442531    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:13.447426    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:13.458504    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:13.948557    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:13.948704    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:13.955431    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:14.442716    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:14.446377    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:14.461082    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:14.947565    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:14.951101    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:14.960615    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:15.443359    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:15.448568    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:15.456913    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:15.951797    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:15.952746    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:15.964646    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:16.461076    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:16.461657    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:16.465363    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:16.953472    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:16.953794    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:16.961509    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:17.456352    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:17.460996    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:17.463979    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:17.946119    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:17.948427    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:17.956352    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:18.453003    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:18.457225    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:18.459555    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:18.959783    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:18.959857    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:18.962231    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:19.451328    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:19.452048    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:19.456617    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:19.943353    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:19.979224    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:19.979250    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:20.454991    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:20.455610    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:20.462585    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:21.162846    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:21.163238    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:21.166988    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:21.447969    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:21.452877    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:21.464452    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:22.525634    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:22.525703    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:22.526879    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:22.534397    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:22.536949    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:22.537217    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:22.944988    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:22.949430    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:22.977399    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:23.455934    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:23.463832    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:23.468708    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:23.945434    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:23.955517    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:23.966439    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:24.456092    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:24.458131    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:24.459079    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:24.961592    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:24.965572    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:24.969175    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:25.459440    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:25.462449    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:25.463443    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:25.947842    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:25.949826    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:25.957749    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:26.443542    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:26.452120    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:26.460233    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:26.948321    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:26.949136    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:26.956298    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:27.443471    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:27.448988    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:27.460341    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:27.950854    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:27.952767    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:27.959368    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:28.458056    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:28.460058    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:28.463119    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:28.947477    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:28.947477    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:28.967307    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:29.752869    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:29.754559    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:29.754582    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:29.940980    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:29.947900    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:29.958655    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:30.565356    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:30.565570    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:30.568547    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:30.948196    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:30.949635    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:30.956981    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:31.442246    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:31.444244    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:31.458222    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:31.952734    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:31.954551    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:31.961699    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:32.445720    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:32.446742    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:32.459535    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:32.955637    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:32.955637    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:32.963704    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:33.443066    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:33.447484    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:33.458083    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:34.058356    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:34.059315    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:34.065258    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:34.443668    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:34.448070    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:34.458725    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:34.945808    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:34.949076    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:34.963872    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:35.452672    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:35.456263    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:35.460743    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:35.947653    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:35.953243    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:35.959332    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:36.454224    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:36.454344    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:36.460003    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:36.945163    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:36.949771    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:36.958760    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:37.448777    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:37.449349    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:37.455128    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:38.063576    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:38.063674    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:38.065204    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:38.444311    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:38.449626    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:38.462796    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:38.949113    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:38.949235    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:38.959294    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:39.451152    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:39.453843    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:39.459859    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:39.942194    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:39.961761    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:39.962615    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:40.452735    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:40.453361    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:40.458956    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:40.957131    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:40.961915    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:40.966295    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:41.446758    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:41.447914    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:41.463086    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:41.950498    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:41.960408    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:41.960408    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:42.444959    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:42.451128    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:42.459246    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:42.950556    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:42.952405    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:42.961759    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:43.441810    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:43.450503    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:43.457583    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:43.948483    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:43.948483    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:43.962686    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:44.452986    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:44.453220    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:44.462424    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:44.945306    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:44.948118    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:44.960919    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:45.452140    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:45.452365    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:45.458971    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:45.959775    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:45.960377    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:45.961080    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:46.450797    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:46.451632    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:46.456756    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:46.941857    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:46.954289    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:46.963384    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:47.450540    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:47.454048    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:47.458295    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:47.952543    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:47.955555    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:47.961321    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:48.456756    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:48.459897    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:48.460007    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:49.166871    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:49.167070    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:49.167070    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:49.508436    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:49.509062    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:49.511231    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:49.957353    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:49.960157    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:49.967266    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:50.446524    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:50.451645    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 03:44:50.459281    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:50.947901    5812 kapi.go:107] duration metric: took 1m5.0100555s to wait for kubernetes.io/minikube-addons=registry ...
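
The repeated `kapi.go:96` lines above are minikube's addon wait loop: it polls the cluster on roughly a half-second interval, listing pods that match each addon's label selector until one reaches Running, then emits the `kapi.go:107` duration metric (here 1m5.01s for `kubernetes.io/minikube-addons=registry`). The following is a minimal Go sketch of that pattern using client-go; it is an illustrative reconstruction under stated assumptions, not minikube's actual `kapi.go` implementation, and `waitForLabeledPod` is a hypothetical name.

```go
package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPod polls every 500ms, listing pods in ns that match
// selector, until one reports phase Running or the timeout elapses.
// On success it logs a duration metric, mirroring the log lines above.
func waitForLabeledPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		// Matches the shape of the kapi.go:96 lines in this log.
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		return false, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}
```

The remaining lines below continue the same loop for the two selectors that have not yet completed, `app.kubernetes.io/name=ingress-nginx` and `kubernetes.io/minikube-addons=csi-hostpath-driver`.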
	I0603 03:44:50.951135    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:50.956658    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:51.459056    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:51.461840    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:51.948688    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:51.957137    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:52.460095    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:52.462225    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:52.950415    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:52.959447    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:53.453872    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:53.458942    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:53.945225    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:53.964254    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:54.454775    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:54.463008    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:54.953079    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:54.961823    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:55.443309    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:55.462112    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:55.949071    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:55.958085    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:56.453096    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:56.466549    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:56.943622    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:56.964031    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:57.447995    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:57.463074    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:57.956187    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:57.962667    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:58.450196    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:58.465005    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:58.951830    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:58.958208    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:59.443716    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:59.459678    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:44:59.949734    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:44:59.957348    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:00.443265    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:00.458864    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:00.950638    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:00.958132    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:01.460960    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:01.463153    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:01.952780    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:01.958782    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:02.455494    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:02.461014    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:02.950054    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:02.970116    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:03.461057    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:03.463742    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:03.953651    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:03.964736    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:04.466234    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:04.470819    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:05.055972    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:05.058627    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:05.456119    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:05.460792    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:05.947997    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:05.965824    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:06.460880    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:06.463162    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:06.946979    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:06.964059    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:07.457369    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:07.462916    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:07.947678    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:07.964744    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:08.453145    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:08.458732    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:08.945794    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:08.962385    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:09.452734    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:09.458070    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:09.945639    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:09.963750    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:10.458591    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:10.458658    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:11.145069    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:11.146569    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:11.453650    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:11.459268    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:11.941607    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:11.961504    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:12.444156    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:12.463166    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:12.947362    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:12.970117    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:13.454788    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:13.460884    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:13.947869    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:13.964495    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:14.453636    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:14.459619    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:14.957377    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:14.967458    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:15.470860    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:15.520277    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:15.950342    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:15.988009    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:16.451329    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:16.458489    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:16.946545    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:16.963012    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:17.453904    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:17.459547    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:17.945107    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:17.960024    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:18.460514    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:18.466593    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:18.946807    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:18.960409    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:19.453325    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:19.458587    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:19.951770    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:19.960348    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:20.453865    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:20.460948    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:20.945580    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:20.961795    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:21.449015    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:21.455045    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:21.943384    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:21.963712    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:22.452617    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:22.459096    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:22.953985    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:22.965126    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:23.445901    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:23.463170    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:23.960348    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:23.960552    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:24.459441    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:24.468431    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:24.947802    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:24.970288    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:25.454807    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:25.460027    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:25.944885    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:25.961440    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:26.449790    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:26.456314    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:26.942120    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:26.961030    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:27.445499    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:27.460511    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:27.952535    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:27.959165    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:28.443612    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:28.469878    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:28.960717    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:28.967686    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:29.446149    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:29.462262    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:29.949796    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:29.963028    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:30.444125    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:30.460140    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:30.949779    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:30.957170    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:31.455861    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:31.457231    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:31.958451    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:31.960168    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:32.442793    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:32.460753    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:32.955625    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:32.968524    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:33.443185    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:33.460500    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:33.955518    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:33.960588    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:34.448022    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:34.463507    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:35.152276    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:35.153567    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:35.469376    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:35.481023    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:35.955795    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:35.956943    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:36.447513    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:36.462634    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:36.951382    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:36.955311    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:37.446450    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:37.463613    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:38.303047    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:38.306059    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:38.446731    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:38.466029    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:38.951632    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:38.961538    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:39.455790    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:39.456251    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:39.947092    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:39.967933    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:40.459020    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:40.459072    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:40.953765    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:40.966927    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:41.554013    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:41.556838    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:41.943363    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:41.962056    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:42.453829    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:42.459145    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:42.962692    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:42.965987    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:43.446715    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:43.464028    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:43.958601    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:43.958601    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:44.447829    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:44.467295    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:44.948792    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:44.956918    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:45.449895    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:45.455373    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:45.957584    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:45.961267    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:46.446653    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:46.463538    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:46.953288    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:46.960103    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:47.444535    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:47.463094    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:47.947570    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:47.963318    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:48.452346    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:48.465287    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:48.947851    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:48.973684    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:49.442134    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:49.459995    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:49.955595    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:49.963376    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:50.440844    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:50.457219    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:50.948684    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:50.957078    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:51.458937    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:51.460251    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:51.943683    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:51.960301    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:52.453349    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:52.459975    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:52.948817    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:52.948817    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:53.446248    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:53.461965    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:53.958581    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:53.962045    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:54.444221    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:54.461453    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:54.952977    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:54.963324    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:55.452642    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:55.462685    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:55.959770    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:55.960773    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:56.448356    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:56.465173    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:56.942927    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:57.022919    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:57.456055    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:57.467814    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:57.960791    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:57.967917    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:58.453728    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:58.456544    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:58.947629    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:58.964078    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:59.457954    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:59.458658    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:45:59.950161    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:45:59.956145    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:00.452999    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:00.458564    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:00.947597    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:00.965950    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:01.458348    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:01.459081    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:01.944257    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:01.962282    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:02.448889    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:02.456532    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:02.951604    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:02.957830    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:03.445635    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:03.463313    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:03.958025    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:03.959788    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:04.461555    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:04.464533    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:04.952690    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:04.961069    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:05.440885    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:05.462674    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 03:46:05.951927    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:05.961247    5812 kapi.go:107] duration metric: took 2m19.0209442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 03:46:06.445828    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:06.950115    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:07.446293    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:07.951791    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:08.442148    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:08.950851    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:09.457632    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:09.940737    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:10.449347    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:10.942547    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:11.449107    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:11.941075    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:12.449897    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:12.943050    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:13.451878    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:13.941801    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:14.449470    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:14.943178    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:15.452511    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:16.406785    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:16.543368    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:16.956786    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:17.442830    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:17.945577    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:18.447171    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:18.945873    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:19.452153    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:19.940677    5812 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 03:46:20.450840    5812 kapi.go:107] duration metric: took 2m34.5184654s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 03:46:33.418770    5812 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 03:46:33.418770    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:33.907464    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:34.412935    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:34.914443    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:35.415710    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:35.912511    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:36.415109    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:36.915677    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:37.419413    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:37.918704    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:38.418989    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:38.906996    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:39.416831    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:39.919288    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:40.417156    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:40.918112    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:41.422233    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:41.922755    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:42.408731    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:42.912803    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:43.412655    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:43.915049    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:44.419007    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:44.907294    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:45.405971    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:45.909736    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:46.409438    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:46.908609    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:47.411573    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:47.910639    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:48.412428    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:48.916282    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:49.414644    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:49.913863    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:50.420692    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:50.914820    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:51.407209    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:51.909841    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:52.412428    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:52.919522    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:53.411661    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:53.918657    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:54.412419    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:54.915760    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:55.415959    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:55.907738    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:56.412669    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:56.915025    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:57.416788    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:57.921060    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:58.421096    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:58.920455    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:59.419798    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:46:59.916609    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:00.417509    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:00.915799    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:01.419656    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:01.915991    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:02.415759    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:02.915839    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:03.418627    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:03.919474    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:04.418923    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:04.922835    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:05.422033    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:05.910405    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:06.417076    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:06.918663    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:07.415575    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:07.906297    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:08.415827    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:08.908795    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:09.411337    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:09.914465    5812 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 03:47:10.417056    5812 kapi.go:107] duration metric: took 3m20.5159162s to wait for kubernetes.io/minikube-addons=gcp-auth ...
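
The kapi.go:96 entries above are minikube's addon readiness poll: about every 500ms it lists the pods matching a label selector, logs any that are not yet Running, and once all matches are Running kapi.go:107 records the total wait as a duration metric. A minimal sketch of that pattern, assuming a standard client-go clientset; the helper name waitForPodsRunning is illustrative, not minikube's actual function:

package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls pods matching selector in ns roughly twice a second,
// logging any non-Running phase, until every match is Running or timeout expires.
// A hypothetical sketch of the kapi.go:96 / kapi.go:107 loop in the log above.
func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				} else {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if running == len(pods.Items) {
				fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the timestamps above
	}
	return fmt.Errorf("timed out waiting %s for %s", timeout, selector)
}

The three duration metrics above (2m19s for csi-hostpath-driver, 2m34s for ingress-nginx, 3m20s for gcp-auth) are exactly what the final Printf in such a loop would report.
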
	I0603 03:47:10.419817    5812 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-402100 cluster.
	I0603 03:47:10.422969    5812 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 03:47:10.425381    5812 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
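
Per the hint above, a pod opts out of the credential mount purely through metadata: the gcp-auth webhook skips any pod whose labels include the `gcp-auth-skip-secret` key. A minimal sketch assuming client-go types; the pod name is a placeholder and busybox:stable is borrowed from the Docker log below:

package gcpauthsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// noCredsPod builds a pod that the gcp-auth admission webhook leaves alone:
// the "gcp-auth-skip-secret" label key is the one named in the log line above.
// Pod name and image are illustrative placeholders.
func noCredsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox:stable"}},
		},
	}
}
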
	I0603 03:47:10.428567    5812 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, volcano, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, cloud-spanner, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0603 03:47:10.432343    5812 addons.go:510] duration metric: took 4m2.5386571s for enable addons: enabled=[nvidia-device-plugin ingress-dns volcano storage-provisioner metrics-server helm-tiller inspektor-gadget cloud-spanner storage-provisioner-rancher yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0603 03:47:10.432343    5812 start.go:245] waiting for cluster config update ...
	I0603 03:47:10.432343    5812 start.go:254] writing updated cluster config ...
	I0603 03:47:10.444451    5812 ssh_runner.go:195] Run: rm -f paused
	I0603 03:47:10.718015    5812 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 03:47:10.727887    5812 out.go:177] * Done! kubectl is now configured to use "addons-402100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 10:48:00 addons-402100 dockerd[1334]: time="2024-06-03T10:48:00.987424204Z" level=info msg="shim disconnected" id=e982b9d3de6d2b029e40c2495a69ef1ea6e7c369675ff8841f97c7a962bc3c97 namespace=moby
	Jun 03 10:48:00 addons-402100 dockerd[1334]: time="2024-06-03T10:48:00.987476103Z" level=warning msg="cleaning up after shim disconnected" id=e982b9d3de6d2b029e40c2495a69ef1ea6e7c369675ff8841f97c7a962bc3c97 namespace=moby
	Jun 03 10:48:00 addons-402100 dockerd[1334]: time="2024-06-03T10:48:00.987486603Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 10:48:01 addons-402100 dockerd[1328]: time="2024-06-03T10:48:01.428782539Z" level=info msg="ignoring event" container=fe249e9c8eb2525ec59bb92c26c5ae0fcba8682aa754509e79323aa5d2d8991c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 10:48:01 addons-402100 dockerd[1334]: time="2024-06-03T10:48:01.429883418Z" level=info msg="shim disconnected" id=fe249e9c8eb2525ec59bb92c26c5ae0fcba8682aa754509e79323aa5d2d8991c namespace=moby
	Jun 03 10:48:01 addons-402100 dockerd[1334]: time="2024-06-03T10:48:01.430362708Z" level=warning msg="cleaning up after shim disconnected" id=fe249e9c8eb2525ec59bb92c26c5ae0fcba8682aa754509e79323aa5d2d8991c namespace=moby
	Jun 03 10:48:01 addons-402100 dockerd[1334]: time="2024-06-03T10:48:01.430492306Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 10:48:02 addons-402100 dockerd[1334]: time="2024-06-03T10:48:02.995522439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 10:48:02 addons-402100 dockerd[1334]: time="2024-06-03T10:48:02.995675236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 10:48:02 addons-402100 dockerd[1334]: time="2024-06-03T10:48:02.995767534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 10:48:02 addons-402100 dockerd[1334]: time="2024-06-03T10:48:02.997457801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 10:48:03 addons-402100 cri-dockerd[1232]: time="2024-06-03T10:48:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1cd28efd008aea8daeaa320bfa92275fcb9e4b62e6d31cb7224c5e9cfbf9d77/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 10:48:04 addons-402100 cri-dockerd[1232]: time="2024-06-03T10:48:04Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.202863528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.203042724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.203059924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.203747509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 10:48:05 addons-402100 dockerd[1328]: time="2024-06-03T10:48:05.309075919Z" level=info msg="ignoring event" container=b5a87a02aac4488703256d888bc016f3fc1daa13463f80062186eed74e686f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.309359513Z" level=info msg="shim disconnected" id=b5a87a02aac4488703256d888bc016f3fc1daa13463f80062186eed74e686f50 namespace=moby
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.309412412Z" level=warning msg="cleaning up after shim disconnected" id=b5a87a02aac4488703256d888bc016f3fc1daa13463f80062186eed74e686f50 namespace=moby
	Jun 03 10:48:05 addons-402100 dockerd[1334]: time="2024-06-03T10:48:05.309423112Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 10:48:07 addons-402100 dockerd[1334]: time="2024-06-03T10:48:07.233158528Z" level=info msg="shim disconnected" id=b1cd28efd008aea8daeaa320bfa92275fcb9e4b62e6d31cb7224c5e9cfbf9d77 namespace=moby
	Jun 03 10:48:07 addons-402100 dockerd[1334]: time="2024-06-03T10:48:07.233312225Z" level=warning msg="cleaning up after shim disconnected" id=b1cd28efd008aea8daeaa320bfa92275fcb9e4b62e6d31cb7224c5e9cfbf9d77 namespace=moby
	Jun 03 10:48:07 addons-402100 dockerd[1334]: time="2024-06-03T10:48:07.233330124Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 10:48:07 addons-402100 dockerd[1328]: time="2024-06-03T10:48:07.235131387Z" level=info msg="ignoring event" container=b1cd28efd008aea8daeaa320bfa92275fcb9e4b62e6d31cb7224c5e9cfbf9d77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b5a87a02aac44       busybox@sha256:5eef5ed34e1e1ff0a4ae850395cbf665c4de6b4b83a32a0bc7bcb998e24e7bbb                                                              3 seconds ago        Exited              busybox                                  0                   b1cd28efd008a       test-local-path
	2261f0150e692       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              9 seconds ago        Exited              helper-pod                               0                   e982b9d3de6d2       helper-pod-create-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec
	58546e9fc8f64       nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7                                                                14 seconds ago       Running             nginx                                    0                   0f2748e750f4e       nginx
	7fd80ffef2298       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                                        28 seconds ago       Running             headlamp                                 0                   5111d6d6e1d83       headlamp-68456f997b-rr7gl
	eed2cf1674e6f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 59 seconds ago       Running             gcp-auth                                 0                   07e684199a77c       gcp-auth-5db96cd9b4-rf5x6
	46efee6ad1180       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   97a9e572066ad       ingress-nginx-controller-768f948f8f-m4z5c
	b4a7a67865b18       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago        Running             csi-snapshotter                          0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	893bb79b933c9       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          2 minutes ago        Running             csi-provisioner                          0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	37cdcd6231233       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   cbcc57914eaea       volcano-admission-7b497cf95b-qb5qk
	4c97f1d2ff717       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	3399ebb4c38f3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	c062966d3ed3c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	c7e79140bfabe       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   f32ee7c21ca70       csi-hostpathplugin-689hw
	05b56ad8cc035       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   6d649040982c7       csi-hostpath-resizer-0
	36a0629b46b86       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   6a4fe9fb528d3       csi-hostpath-attacher-0
	c2cdab493c156       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   1265833ff4d9f       snapshot-controller-745499f584-fcwnb
	1d3792663d32f       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   c1086df0a86ca       volcano-scheduler-765f888978-p9clm
	2245d61d1fe88       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   d9ede0378ae2e       snapshot-controller-745499f584-ddh8n
	5b3d5ca2dce90       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   269ae286b78c0       volcano-controller-86c5446455-c9mc8
	9062e270912f6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              patch                                    0                   91b7adb110e9f       ingress-nginx-admission-patch-bg2vw
	0adbf807cb960       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   6e69ca9d0266f       ingress-nginx-admission-create-4nngd
	bfacc6a394358       volcanosh/vc-webhook-manager@sha256:082b6a3b7b8b69d98541a8ea56958ef427fdba54ea555870799f8c9ec2754c1b                                         2 minutes ago        Exited              main                                     0                   00b35ba47972f       volcano-admission-init-t6bvh
	0a9bec507003d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   c6adb5569411f       local-path-provisioner-8d985888d-rnlwk
	dbcc069936f44       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   4aa92b464202c       yakd-dashboard-5ddbf7d777-ddwns
	19d2700ee4cf6       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   e9e9e73a024f4       cloud-spanner-emulator-6fcd4f6f98-hllxv
	cfb3f0ffb41a3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   25cf62a5463a5       kube-ingress-dns-minikube
	e484425f98f65       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   49f5f95a3921c       storage-provisioner
	18cd4876b542c       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   95f9651189ad8       coredns-7db6d8ff4d-h2ptk
	7dce3aa971fa6       747097150317f                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   8c71d73023bc1       kube-proxy-kxf2b
	4a9edcc7f3d97       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   1648e21112c49       kube-apiserver-addons-402100
	a2525ebdc58bf       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   05cdf505de4f1       kube-scheduler-addons-402100
	9d44ea897e91e       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   71dd3f6ff8655       etcd-addons-402100
	46f4b8556136b       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   9f941bdee9fd4       kube-controller-manager-addons-402100
	
	
	==> controller_ingress [46efee6ad118] <==
	I0603 10:46:19.677344       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0603 10:46:19.696315       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0603 10:46:19.730686       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7a056414-f98d-4d35-a9a7-b4b31202aede", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0603 10:46:19.731076       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ceb92488-78f3-4e4f-8da9-2a543b5b739d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0603 10:46:19.731256       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"60a10da9-f26c-4691-9898-9f9c5c014974", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0603 10:46:20.900121       7 nginx.go:307] "Starting NGINX process"
	I0603 10:46:20.900455       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0603 10:46:20.903140       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0603 10:46:20.906583       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0603 10:46:20.917698       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0603 10:46:20.918333       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-m4z5c"
	I0603 10:46:20.926687       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-m4z5c" node="addons-402100"
	I0603 10:46:20.965910       7 controller.go:210] "Backend successfully reloaded"
	I0603 10:46:20.966350       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0603 10:46:20.966655       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-m4z5c", UID:"9e2d4812-9441-43d9-b10f-4147a203d01c", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0603 10:47:47.290487       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0603 10:47:47.323375       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.033s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.033s testedConfigurationSize:18.1kB}
	I0603 10:47:47.324218       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0603 10:47:47.344048       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0603 10:47:47.345178       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"e3027209-fc92-45a5-b0c9-29dde1172aeb", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0603 10:47:49.052363       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0603 10:47:49.053312       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0603 10:47:49.130523       7 controller.go:210] "Backend successfully reloaded"
	I0603 10:47:49.131860       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-m4z5c", UID:"9e2d4812-9441-43d9-b10f-4147a203d01c", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0603 10:47:52.385210       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
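
The two "does not have any active Endpoint" warnings above mean the default/nginx Service existed but none of its backing pods were ready at reload time (the nginx container starts only seconds earlier, per the container table below). A sketch of the underlying check, assuming client-go; hasActiveEndpoint is a hypothetical helper, not the controller's actual code:

package ingresssketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasActiveEndpoint reports whether a Service has at least one ready address,
// the condition behind the "does not have any active Endpoint" warnings above.
func hasActiveEndpoint(ctx context.Context, c kubernetes.Interface, ns, svc string) (bool, error) {
	ep, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, s := range ep.Subsets {
		if len(s.Addresses) > 0 {
			return true, nil
		}
	}
	return false, nil
}
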
	
	
	==> coredns [18cd4876b542] <==
	[INFO] 10.244.0.8:33522 - 25642 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000509579s
	[INFO] 10.244.0.8:41719 - 48340 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118395s
	[INFO] 10.244.0.8:41719 - 26326 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124795s
	[INFO] 10.244.0.8:54227 - 45741 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091196s
	[INFO] 10.244.0.8:54227 - 17324 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095796s
	[INFO] 10.244.0.8:55516 - 25487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000287388s
	[INFO] 10.244.0.8:55516 - 32141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145594s
	[INFO] 10.244.0.8:47230 - 25445 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130394s
	[INFO] 10.244.0.8:47230 - 51558 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058798s
	[INFO] 10.244.0.8:41492 - 6211 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052698s
	[INFO] 10.244.0.8:41492 - 45888 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055597s
	[INFO] 10.244.0.8:46873 - 46335 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067897s
	[INFO] 10.244.0.8:46873 - 19452 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112395s
	[INFO] 10.244.0.8:52328 - 62150 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048298s
	[INFO] 10.244.0.8:52328 - 10456 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000045598s
	[INFO] 10.244.0.26:44953 - 57308 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543688s
	[INFO] 10.244.0.26:47399 - 6995 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000085798s
	[INFO] 10.244.0.26:41782 - 57497 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171396s
	[INFO] 10.244.0.26:54301 - 8772 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177996s
	[INFO] 10.244.0.26:34753 - 25595 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087897s
	[INFO] 10.244.0.26:36658 - 22618 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091598s
	[INFO] 10.244.0.26:54547 - 16323 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001836157s
	[INFO] 10.244.0.26:38121 - 3033 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002053751s
	[INFO] 10.244.0.28:40194 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000643189s
	[INFO] 10.244.0.28:36991 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152598s
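
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion: with `options ndots:5` (visible in the cri-dockerd resolv.conf rewrite in the Docker log above), a name with fewer than five dots is tried against each search domain before being queried as-is, so each lookup of registry.kube-system.svc.cluster.local produces several NXDOMAIN answers before the final NOERROR. Appending a trailing dot marks the name fully qualified and skips the extra queries; a tiny Go illustration, assuming it runs inside a cluster pod:

package dnssketch

import "net"

// Inside a pod whose resolv.conf carries "options ndots:5", the unqualified name
// below is first expanded against each search domain, producing the NXDOMAIN
// answers seen in the coredns log; the trailing-dot form is an FQDN and
// resolves with a single query.
func lookups() (expanded, direct []string, err error) {
	expanded, err = net.LookupHost("registry.kube-system.svc.cluster.local")
	if err != nil {
		return nil, nil, err
	}
	direct, err = net.LookupHost("registry.kube-system.svc.cluster.local.")
	return expanded, direct, err
}
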
	
	
	==> describe nodes <==
	Name:               addons-402100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-402100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=addons-402100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T03_42_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-402100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-402100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 10:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-402100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 10:48:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 10:48:00 +0000   Mon, 03 Jun 2024 10:42:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 10:48:00 +0000   Mon, 03 Jun 2024 10:42:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 10:48:00 +0000   Mon, 03 Jun 2024 10:42:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 10:48:00 +0000   Mon, 03 Jun 2024 10:42:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.90.102
	  Hostname:    addons-402100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d4a12926b0b40738468b22f650b2640
	  System UUID:                cc43ceb4-c933-0f4d-b85d-03fb69170d2e
	  Boot ID:                    62cbf701-8a72-4879-a8ba-03a944c3dea1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-hllxv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  gcp-auth                    gcp-auth-5db96cd9b4-rf5x6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  headlamp                    headlamp-68456f997b-rr7gl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-m4z5c    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-h2ptk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpathplugin-689hw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 etcd-addons-402100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m14s
	  kube-system                 kube-apiserver-addons-402100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-402100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-kxf2b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-402100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 snapshot-controller-745499f584-ddh8n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 snapshot-controller-745499f584-fcwnb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  local-path-storage          local-path-provisioner-8d985888d-rnlwk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  volcano-system              volcano-admission-7b497cf95b-qb5qk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  volcano-system              volcano-controller-86c5446455-c9mc8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system              volcano-scheduler-765f888978-p9clm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-ddwns              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node addons-402100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node addons-402100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node addons-402100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m15s                  kubelet          Node addons-402100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s                  kubelet          Node addons-402100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s                  kubelet          Node addons-402100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m13s                  kubelet          Node addons-402100 status is now: NodeReady
	  Normal  RegisteredNode           5m1s                   node-controller  Node addons-402100 event: Registered Node addons-402100 in Controller
	
	
	==> dmesg <==
	[  +9.350637] hrtimer: interrupt took 1246336 ns
	[  +1.363770] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.052682] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.001075] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.022811] kauditd_printk_skb: 116 callbacks suppressed
	[Jun 3 10:44] kauditd_printk_skb: 64 callbacks suppressed
	[ +35.349774] kauditd_printk_skb: 6 callbacks suppressed
	[Jun 3 10:45] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.005905] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.064083] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.008001] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.576872] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.572267] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.033916] kauditd_printk_skb: 7 callbacks suppressed
	[Jun 3 10:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +25.196800] kauditd_printk_skb: 33 callbacks suppressed
	[ +25.571388] kauditd_printk_skb: 72 callbacks suppressed
	[Jun 3 10:47] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.843299] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.001410] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.538193] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.361364] kauditd_printk_skb: 45 callbacks suppressed
	[  +9.188397] kauditd_printk_skb: 30 callbacks suppressed
	[Jun 3 10:48] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.273041] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [9d44ea897e91] <==
	{"level":"info","ts":"2024-06-03T10:45:41.533171Z","caller":"traceutil/trace.go:171","msg":"trace[1002044074] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1263; }","duration":"138.754313ms","start":"2024-06-03T10:45:41.394387Z","end":"2024-06-03T10:45:41.533141Z","steps":["trace[1002044074] 'range keys from in-memory index tree'  (duration: 138.518118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:45:41.534459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.564525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14515"}
	{"level":"info","ts":"2024-06-03T10:45:41.534508Z","caller":"traceutil/trace.go:171","msg":"trace[770663290] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1263; }","duration":"109.630924ms","start":"2024-06-03T10:45:41.424869Z","end":"2024-06-03T10:45:41.5345Z","steps":["trace[770663290] 'range keys from in-memory index tree'  (duration: 109.495927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:45:47.831127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.597534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-06-03T10:45:47.831298Z","caller":"traceutil/trace.go:171","msg":"trace[854411781] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1281; }","duration":"164.801129ms","start":"2024-06-03T10:45:47.666478Z","end":"2024-06-03T10:45:47.831279Z","steps":["trace[854411781] 'range keys from in-memory index tree'  (duration: 164.38854ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:45:51.607246Z","caller":"traceutil/trace.go:171","msg":"trace[457353628] transaction","detail":"{read_only:false; response_revision:1300; number_of_response:1; }","duration":"141.81767ms","start":"2024-06-03T10:45:51.465407Z","end":"2024-06-03T10:45:51.607225Z","steps":["trace[457353628] 'process raft request'  (duration: 141.589176ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:46:16.384519Z","caller":"traceutil/trace.go:171","msg":"trace[992433830] linearizableReadLoop","detail":"{readStateIndex:1441; appliedIndex:1440; }","duration":"459.99521ms","start":"2024-06-03T10:46:15.924505Z","end":"2024-06-03T10:46:16.384501Z","steps":["trace[992433830] 'read index received'  (duration: 459.867414ms)","trace[992433830] 'applied index is now lower than readState.Index'  (duration: 127.196µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T10:46:16.384802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.2695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14515"}
	{"level":"info","ts":"2024-06-03T10:46:16.384854Z","caller":"traceutil/trace.go:171","msg":"trace[2114746240] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1381; }","duration":"460.354697ms","start":"2024-06-03T10:46:15.92449Z","end":"2024-06-03T10:46:16.384845Z","steps":["trace[2114746240] 'agreement among raft nodes before linearized reading'  (duration: 460.126905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:46:16.384882Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:46:15.924483Z","time spent":"460.390396ms","remote":"127.0.0.1:54950","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14539,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-06-03T10:46:16.385245Z","caller":"traceutil/trace.go:171","msg":"trace[1849157817] transaction","detail":"{read_only:false; response_revision:1381; number_of_response:1; }","duration":"473.720138ms","start":"2024-06-03T10:46:15.911515Z","end":"2024-06-03T10:46:16.385235Z","steps":["trace[1849157817] 'process raft request'  (duration: 472.897667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:46:16.385329Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:46:15.911502Z","time spent":"473.777037ms","remote":"127.0.0.1:55020","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1375 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-06-03T10:46:16.524136Z","caller":"traceutil/trace.go:171","msg":"trace[478774102] transaction","detail":"{read_only:false; response_revision:1382; number_of_response:1; }","duration":"108.498675ms","start":"2024-06-03T10:46:16.415614Z","end":"2024-06-03T10:46:16.524112Z","steps":["trace[478774102] 'process raft request'  (duration: 73.873964ms)","trace[478774102] 'compare'  (duration: 33.651045ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T10:46:18.22109Z","caller":"traceutil/trace.go:171","msg":"trace[1181475567] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"193.959429ms","start":"2024-06-03T10:46:18.027111Z","end":"2024-06-03T10:46:18.221071Z","steps":["trace[1181475567] 'process raft request'  (duration: 193.766936ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:46:18.221412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.434039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T10:46:18.221612Z","caller":"traceutil/trace.go:171","msg":"trace[968137549] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1386; }","duration":"138.744529ms","start":"2024-06-03T10:46:18.082857Z","end":"2024-06-03T10:46:18.221602Z","steps":["trace[968137549] 'agreement among raft nodes before linearized reading'  (duration: 138.294444ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:46:18.22109Z","caller":"traceutil/trace.go:171","msg":"trace[1047375247] linearizableReadLoop","detail":"{readStateIndex:1446; appliedIndex:1446; }","duration":"138.170348ms","start":"2024-06-03T10:46:18.082904Z","end":"2024-06-03T10:46:18.221074Z","steps":["trace[1047375247] 'read index received'  (duration: 138.163748ms)","trace[1047375247] 'applied index is now lower than readState.Index'  (duration: 5.7µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T10:47:34.180311Z","caller":"traceutil/trace.go:171","msg":"trace[1655911013] transaction","detail":"{read_only:false; response_revision:1653; number_of_response:1; }","duration":"120.526681ms","start":"2024-06-03T10:47:34.059751Z","end":"2024-06-03T10:47:34.180278Z","steps":["trace[1655911013] 'process raft request'  (duration: 119.950394ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:47:36.786223Z","caller":"traceutil/trace.go:171","msg":"trace[1620706078] transaction","detail":"{read_only:false; response_revision:1660; number_of_response:1; }","duration":"109.041941ms","start":"2024-06-03T10:47:36.677162Z","end":"2024-06-03T10:47:36.786204Z","steps":["trace[1620706078] 'process raft request'  (duration: 107.446115ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:47:36.969012Z","caller":"traceutil/trace.go:171","msg":"trace[846892469] transaction","detail":"{read_only:false; response_revision:1661; number_of_response:1; }","duration":"145.894332ms","start":"2024-06-03T10:47:36.823096Z","end":"2024-06-03T10:47:36.96899Z","steps":["trace[846892469] 'process raft request'  (duration: 104.33156ms)","trace[846892469] 'compare'  (duration: 41.404779ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T10:47:39.640447Z","caller":"traceutil/trace.go:171","msg":"trace[1998603297] transaction","detail":"{read_only:false; response_revision:1676; number_of_response:1; }","duration":"440.753945ms","start":"2024-06-03T10:47:39.19967Z","end":"2024-06-03T10:47:39.640423Z","steps":["trace[1998603297] 'process raft request'  (duration: 440.464558ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T10:47:39.640559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T10:47:39.199644Z","time spent":"440.856339ms","remote":"127.0.0.1:54820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118201,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:1616 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:118164 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >"}
	{"level":"warn","ts":"2024-06-03T10:47:40.019166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.705404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T10:47:40.019232Z","caller":"traceutil/trace.go:171","msg":"trace[750337558] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:1676; }","duration":"275.869797ms","start":"2024-06-03T10:47:39.743347Z","end":"2024-06-03T10:47:40.019217Z","steps":["trace[750337558] 'range keys from in-memory index tree'  (duration: 275.59461ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T10:47:40.01961Z","caller":"traceutil/trace.go:171","msg":"trace[1812199120] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"272.383459ms","start":"2024-06-03T10:47:39.747216Z","end":"2024-06-03T10:47:40.0196Z","steps":["trace[1812199120] 'process raft request'  (duration: 193.900001ms)","trace[1812199120] 'compare'  (duration: 78.381063ms)"],"step_count":2}
	
	
	==> gcp-auth [eed2cf1674e6] <==
	2024/06/03 10:47:09 GCP Auth Webhook started!
	2024/06/03 10:47:16 Ready to marshal response ...
	2024/06/03 10:47:16 Ready to write response ...
	2024/06/03 10:47:22 Ready to marshal response ...
	2024/06/03 10:47:22 Ready to write response ...
	2024/06/03 10:47:27 Ready to marshal response ...
	2024/06/03 10:47:27 Ready to write response ...
	2024/06/03 10:47:27 Ready to marshal response ...
	2024/06/03 10:47:27 Ready to write response ...
	2024/06/03 10:47:27 Ready to marshal response ...
	2024/06/03 10:47:27 Ready to write response ...
	2024/06/03 10:47:47 Ready to marshal response ...
	2024/06/03 10:47:47 Ready to write response ...
	2024/06/03 10:47:55 Ready to marshal response ...
	2024/06/03 10:47:55 Ready to write response ...
	2024/06/03 10:47:55 Ready to marshal response ...
	2024/06/03 10:47:55 Ready to write response ...
	
	
	==> kernel <==
	 10:48:08 up 7 min,  0 users,  load average: 2.32, 2.20, 1.10
	Linux addons-402100 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4a9edcc7f3d9] <==
	W0603 10:45:57.857702       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:45:58.934044       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:45:59.960265       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:46:00.989243       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:46:02.028483       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:46:03.090869       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:46:04.184086       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.71.221:443: connect: connection refused
	W0603 10:46:33.300863       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	E0603 10:46:33.300976       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	W0603 10:46:52.388847       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	E0603 10:46:52.389430       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	W0603 10:46:52.629868       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	E0603 10:46:52.630428       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.197.93:443: connect: connection refused
	I0603 10:47:27.521392       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.213.149"}
	I0603 10:47:39.733864       1 trace.go:236] Trace[708150377]: "GuaranteedUpdate etcd3" audit-id:,key:/ranges/serviceips,type:*core.RangeAllocation,resource:serviceipallocations (03-Jun-2024 10:47:39.117) (total time: 523ms):
	Trace[708150377]: ---"Txn call completed" 490ms (10:47:39.641)
	Trace[708150377]: [523.996081ms] [523.996081ms] END
	I0603 10:47:39.736411       1 trace.go:236] Trace[1634522036]: "Delete" accept:application/json,audit-id:eca85bae-fa0d-4392-b233-34aaaa80f3dd,client:127.0.0.1,api-group:,api-version:v1,name:tiller-deploy,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/kube-system/services/tiller-deploy,user-agent:kubectl/v1.30.1 (linux/amd64) kubernetes/6911225,verb:DELETE (03-Jun-2024 10:47:39.035) (total time: 700ms):
	Trace[1634522036]: ---"Object deleted from database" 698ms (10:47:39.734)
	Trace[1634522036]: [700.402094ms] [700.402094ms] END
	I0603 10:47:47.331423       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0603 10:47:47.814599       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.181.253"}
	I0603 10:47:58.235897       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0603 10:48:01.257232       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0603 10:48:02.344019       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
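
The apiserver lines above show both admission-webhook failure modes in one log: volcano's mutatequeue webhook fails closed (requests are rejected while its service is unreachable), while gcp-auth-mutate fails open (only a warning is logged). That behavior is selected by the webhook's failurePolicy. A hedged client-go sketch of registering a mutating webhook with the field spelled out; the object and webhook names are illustrative, and Rules are omitted for brevity, so as written it matches nothing:

```go
package main

import (
	"context"
	"log"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	failClosed := admissionregistrationv1.Fail // volcano's choice above; Ignore would fail open
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/queues/mutate"

	hook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-mutate"}, // illustrative name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name:                    "mutatequeue.example.sh", // illustrative; no Rules, so it matches nothing
			FailurePolicy:           &failClosed,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "volcano-system",
					Name:      "volcano-admission-service",
					Path:      &path,
				},
			},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.Background(), hook, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```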
	
	
	==> kube-controller-manager [46f4b8556136] <==
	I0603 10:47:10.082041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="32.410731ms"
	I0603 10:47:10.083348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="77.098µs"
	I0603 10:47:26.047398       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 10:47:26.058096       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 10:47:26.196899       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 10:47:26.199315       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 10:47:27.683795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="101.524987ms"
	I0603 10:47:27.709547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="25.645867ms"
	I0603 10:47:27.710660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="41.7µs"
	I0603 10:47:27.772463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="186.497µs"
	I0603 10:47:38.945163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="12.699µs"
	I0603 10:47:41.233676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="96.797µs"
	I0603 10:47:41.381838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="29.606706ms"
	I0603 10:47:41.383222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="69.898µs"
	I0603 10:47:45.385192       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="7.5µs"
	I0603 10:47:54.244552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="23.399µs"
	E0603 10:48:02.351366       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:48:03.468033       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:48:03.468183       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 10:48:05.923203       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 10:48:05.923609       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 10:48:07.820015       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 10:48:07.820088       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 10:48:08.230024       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 10:48:08.230685       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [7dce3aa971fa] <==
	I0603 10:43:21.068790       1 server_linux.go:69] "Using iptables proxy"
	I0603 10:43:21.550799       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.90.102"]
	I0603 10:43:21.889017       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 10:43:21.889082       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 10:43:21.889114       1 server_linux.go:165] "Using iptables Proxier"
	I0603 10:43:21.953022       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 10:43:21.953329       1 server.go:872] "Version info" version="v1.30.1"
	I0603 10:43:21.953352       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 10:43:21.999907       1 config.go:192] "Starting service config controller"
	I0603 10:43:22.000967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 10:43:22.001090       1 config.go:101] "Starting endpoint slice config controller"
	I0603 10:43:22.001104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 10:43:22.018259       1 config.go:319] "Starting node config controller"
	I0603 10:43:22.018285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 10:43:22.109083       1 shared_informer.go:320] Caches are synced for service config
	I0603 10:43:22.125142       1 shared_informer.go:320] Caches are synced for node config
	I0603 10:43:22.201595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a2525ebdc58b] <==
	W0603 10:42:51.630413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 10:42:51.630522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 10:42:51.637457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 10:42:51.637746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 10:42:51.703868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 10:42:51.704037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 10:42:51.864668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 10:42:51.864723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 10:42:51.897000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 10:42:51.897115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 10:42:51.963308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 10:42:51.963527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 10:42:52.050899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 10:42:52.051059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 10:42:52.067163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 10:42:52.067352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 10:42:52.105185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 10:42:52.105602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 10:42:52.126660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 10:42:52.126981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 10:42:52.207149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 10:42:52.207556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 10:42:52.222479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 10:42:52.222525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 10:42:54.030009       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
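
The scheduler's burst of "forbidden" errors above is a startup race: its informers begin listing before the apiserver has finished syncing RBAC, and the errors stop once caches sync (final line). One way to replay a single such check after the fact is a SelfSubjectAccessReview while impersonating the scheduler; a sketch, assuming the caller has impersonation rights:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// Ask "can system:kube-scheduler list replicationcontrollers?" as the log does.
	cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:kube-scheduler"}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "replicationcontrollers", // core API group, cluster scope
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```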
	
	
	==> kubelet <==
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.456914    2121 topology_manager.go:215] "Topology Admit Handler" podUID="7190b782-2ad6-4f7a-b4d4-412d51139378" podNamespace="default" podName="test-local-path"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.457820    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.457958    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b79fb2e-32b4-49d1-85e2-782d2ec257a5" containerName="helper-pod"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.457976    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.458045    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.458053    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: E0603 10:48:02.458070    2121 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d052b6-5e05-4727-ad22-af68e7eac41f" containerName="metrics-server"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.458147    2121 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.458259    2121 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b79fb2e-32b4-49d1-85e2-782d2ec257a5" containerName="helper-pod"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.458278    2121 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.458287    2121 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e2469b-4e4e-4672-ab43-cc1e7ba8a485" containerName="gadget"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.458295    2121 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d052b6-5e05-4727-ad22-af68e7eac41f" containerName="metrics-server"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.601108    2121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9lw\" (UniqueName: \"kubernetes.io/projected/7190b782-2ad6-4f7a-b4d4-412d51139378-kube-api-access-4t9lw\") pod \"test-local-path\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") " pod="default/test-local-path"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.601749    2121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec\") pod \"test-local-path\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") " pod="default/test-local-path"
	Jun 03 10:48:02 addons-402100 kubelet[2121]: I0603 10:48:02.602213    2121 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-gcp-creds\") pod \"test-local-path\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") " pod="default/test-local-path"
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.559306    2121 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t9lw\" (UniqueName: \"kubernetes.io/projected/7190b782-2ad6-4f7a-b4d4-412d51139378-kube-api-access-4t9lw\") pod \"7190b782-2ad6-4f7a-b4d4-412d51139378\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") "
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.559435    2121 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-gcp-creds\") pod \"7190b782-2ad6-4f7a-b4d4-412d51139378\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") "
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.559473    2121 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec\") pod \"7190b782-2ad6-4f7a-b4d4-412d51139378\" (UID: \"7190b782-2ad6-4f7a-b4d4-412d51139378\") "
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.559593    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec" (OuterVolumeSpecName: "data") pod "7190b782-2ad6-4f7a-b4d4-412d51139378" (UID: "7190b782-2ad6-4f7a-b4d4-412d51139378"). InnerVolumeSpecName "pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.560133    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7190b782-2ad6-4f7a-b4d4-412d51139378" (UID: "7190b782-2ad6-4f7a-b4d4-412d51139378"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.569307    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7190b782-2ad6-4f7a-b4d4-412d51139378-kube-api-access-4t9lw" (OuterVolumeSpecName: "kube-api-access-4t9lw") pod "7190b782-2ad6-4f7a-b4d4-412d51139378" (UID: "7190b782-2ad6-4f7a-b4d4-412d51139378"). InnerVolumeSpecName "kube-api-access-4t9lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.661008    2121 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4t9lw\" (UniqueName: \"kubernetes.io/projected/7190b782-2ad6-4f7a-b4d4-412d51139378-kube-api-access-4t9lw\") on node \"addons-402100\" DevicePath \"\""
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.661137    2121 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-gcp-creds\") on node \"addons-402100\" DevicePath \"\""
	Jun 03 10:48:07 addons-402100 kubelet[2121]: I0603 10:48:07.661157    2121 reconciler_common.go:289] "Volume detached for volume \"pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec\" (UniqueName: \"kubernetes.io/host-path/7190b782-2ad6-4f7a-b4d4-412d51139378-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec\") on node \"addons-402100\" DevicePath \"\""
	Jun 03 10:48:08 addons-402100 kubelet[2121]: I0603 10:48:08.103686    2121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1cd28efd008aea8daeaa320bfa92275fcb9e4b62e6d31cb7224c5e9cfbf9d77"
	
	
	==> storage-provisioner [e484425f98f6] <==
	I0603 10:43:42.164871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 10:43:42.372613       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 10:43:42.372677       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 10:43:42.920440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 10:43:42.920665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-402100_9ce8409e-c13a-4dc9-8413-29f8181f27ce!
	I0603 10:43:42.921902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e07343b4-fb3e-4135-a123-ca8958871d8d", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-402100_9ce8409e-c13a-4dc9-8413-29f8181f27ce became leader
	I0603 10:43:43.673452       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-402100_9ce8409e-c13a-4dc9-8413-29f8181f27ce!
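
The storage-provisioner serializes itself with client-go leader election, acquiring the kube-system/k8s.io-minikube-hostpath lock before starting its controller (via an Endpoints object, per the event above). A sketch of the same pattern with the currently recommended Lease lock; the names are reused from the log and the timings are illustrative:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Same pattern as the provisioner above: acquire the lock, then run.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Print("acquired lease, starting controller") },
			OnStoppedLeading: func() { log.Print("lost lease, exiting") },
		},
	})
}
```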
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 03:47:59.199406    2760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
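
The recurring stderr warning is benign: the Docker CLI's current context is "default", but its stored metadata file is missing on this Jenkins worker. The long hex directory in the path appears to be how the CLI keys its context store, a SHA-256 digest of the context name, so the expected location can be recomputed. A small Go sketch under that layout assumption:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: the CLI names each context's metadata directory after the
	// SHA-256 of the context name, matching the path in the warning above.
	name := "default"
	sum := sha256.Sum256([]byte(name))
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	meta := filepath.Join(home, ".docker", "contexts", "meta",
		fmt.Sprintf("%x", sum), "meta.json")
	if _, err := os.Stat(meta); err != nil {
		fmt.Printf("context %q has no stored metadata: %v\n", name, err)
	}
}
```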
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-402100 -n addons-402100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-402100 -n addons-402100: (14.1668906s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-402100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod ingress-nginx-admission-create-4nngd ingress-nginx-admission-patch-bg2vw helper-pod-delete-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec volcano-admission-init-t6bvh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-402100 describe pod task-pv-pod ingress-nginx-admission-create-4nngd ingress-nginx-admission-patch-bg2vw helper-pod-delete-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec volcano-admission-init-t6bvh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-402100 describe pod task-pv-pod ingress-nginx-admission-create-4nngd ingress-nginx-admission-patch-bg2vw helper-pod-delete-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec volcano-admission-init-t6bvh: exit status 1 (202.0938ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-402100/172.17.90.102
	Start Time:       Mon, 03 Jun 2024 03:48:16 -0700
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvljz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-lvljz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/task-pv-pod to addons-402100
	  Normal  Pulling    6s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4nngd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bg2vw" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec" not found
	Error from server (NotFound): pods "volcano-admission-init-t6bvh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-402100 describe pod task-pv-pod ingress-nginx-admission-create-4nngd ingress-nginx-admission-patch-bg2vw helper-pod-delete-pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec volcano-admission-init-t6bvh: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.60s)

                                                
                                    
TestForceSystemdEnv (10800.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-668100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0603 06:31:42.788426    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-668100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m5.2996367s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-668100 ssh "docker info --format {{.CgroupDriver}}"
E0603 06:38:39.525543    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-668100 ssh "docker info --format {{.CgroupDriver}}": (10.2354175s)
helpers_test.go:175: Cleaning up "force-systemd-env-668100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-668100
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdEnv (7m37s)
	TestNetworkPlugins (10m13s)
	TestPause (10m51s)
	TestPause/serial (10m51s)
	TestPause/serial/SecondStartNoReconfiguration (4m30s)
	TestStartStop (10m51s)
	TestStartStop/group/no-preload (1m48s)
	TestStartStop/group/no-preload/serial (1m48s)
	TestStartStop/group/no-preload/serial/FirstStart (1m48s)
	TestStartStop/group/old-k8s-version (2m46s)
	TestStartStop/group/old-k8s-version/serial (2m46s)
	TestStartStop/group/old-k8s-version/serial/FirstStart (2m46s)

                                                
                                                
goroutine 2404 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
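
Goroutine 2404 is the test binary's timeout alarm: go test -timeout (3h0m0s for this job) arms a timer in testing.(*M).startAlarm, and when it fires the binary panics with the list of still-running tests, which is why TestForceSystemdEnv is reported at 10800.41s: that is the alarm firing during its cleanup, not the test's own runtime. A long-running test can cooperate with that deadline via t.Deadline(); a sketch with an illustrative test name, not part of minikube's suite:

```go
package integration

import (
	"testing"
	"time"
)

// TestDeadlineAware skips itself when the -timeout alarm is about to fire,
// instead of letting startAlarm panic the whole binary mid-test.
func TestDeadlineAware(t *testing.T) {
	deadline, ok := t.Deadline() // reflects go test -timeout (3h0m0s in this job)
	wait := 6 * time.Minute      // the longest wait this test would need
	if ok && time.Until(deadline) < wait {
		t.Skipf("only %v left before the test binary's alarm fires", time.Until(deadline))
	}
	// ... real work would go here ...
}
```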

                                                
                                                
goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000462ea0, 0xc00098fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000008990, {0x4b21f80, 0x2a, 0x2a}, {0x2756567?, 0x59806f?, 0x4b45240?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000881900)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000881900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 41 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00061f100)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2237 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0004f96c0, {0x2739884?, 0x24?}, 0xc00079c000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0004f96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0004f96c0, 0xc00067d5f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2063
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2250 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00089b380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00089b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00089b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00089b380, 0xc001714100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 60 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000967100, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 67 [select, 2 minutes]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 66
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2257 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b151e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b151e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b151e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b151e0, 0xc00078e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1262 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016c58c0, 0xc00199aea0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 876
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2248 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00089b040, {0x26fbb8c?, 0x0?}, 0xc000070100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00089b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00089b040, 0xc001714080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2365 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x274f52c95c0?, {0xc000adfb20?, 0x4f7ea5?, 0x4?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x274f52c95c0?, 0xc000adfb80?, 0x4efdd6?, 0x4bd26a0?, 0xc000adfc08?, 0x4e2985?, 0x274efb60eb8?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x77c, {0xc000a7f42d?, 0x2bd3, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a32788?, {0xc000a7f42d?, 0x51c1be?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a32788, {0xc000a7f42d, 0x2bd3, 0x2bd3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000794040, {0xc000a7f42d?, 0x274efb6da88?, 0x3e7f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018a8210, {0x375b9e0, 0xc0000a65b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0018a8210}, {0x375b9e0, 0xc0000a65b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375bb20, 0xc0018a8210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e0c36?, {0x375bb20?, 0xc0018a8210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0018a8210}, {0x375baa0, 0xc000794040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000896580?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2235 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f8ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0004f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0004f8ea0, 0xc000b18040)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 908 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2347 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7fff0def4de0?, {0xc001951a10?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x754, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0018de630)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018d62c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0018d62c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f9a00, 0xc0018d62c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x3780740, 0xc0005d0000}, 0xc0004f9a00, {0xc0006685b0?, 0xc02b7270ec?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0004f9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0004f9a00, 0xc00079c000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2237
	/usr/local/go/src/testing/testing.go:1742 +0x390
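
Goroutine 2347 shows the expected shape of a test that is mid-command: integration.Run calls (*Cmd).Run, which blocks in (*Process).Wait (WaitForSingleObject on Windows) until the child exits, while a companion watchCtx goroutine (goroutine 2350 further down) waits on the context. A sketch of the same call path under a deadline; the binary path, profile name, and timeout are placeholders only:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe", "start", "-p", "demo")
	// Run blocks in (*Process).Wait (WaitForSingleObject on Windows) while
	// the watchCtx goroutine that Start spawned waits on ctx.Done() to kill
	// the child if the deadline fires first.
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
	}
}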

goroutine 761 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x274f553f7b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000510408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000a556a0, 0xc00168fbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000a55688, 0x31c, {0xc0004925a0?, 0x0?, 0x0?}, 0xc000510008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000a55688, 0xc00168fd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000a55688)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc001f123a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001f123a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005d20f0, {0x37739a0, 0xc001f123a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0005d20f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0004f81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 758
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
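
Goroutine 761 is the HTTP proxy the functional tests start once and leave running; 162 minutes in Accept only means no client has connected in a while. A stand-in with the same shape as startHTTPProxy (this is not the test's actual proxy implementation, and the upstream target is an assumption):

package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	target, err := url.Parse("http://127.0.0.1:8441") // hypothetical upstream
	if err != nil {
		panic(err)
	}
	srv := &http.Server{Handler: httputil.NewSingleHostReverseProxy(target)}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // pick any free port
	if err != nil {
		panic(err)
	}
	fmt.Println("proxy listening on", ln.Addr())
	// Serve parks in (*TCPListener).Accept between connections, which is the
	// "IO wait" state shown above.
	if err := srv.Serve(ln); err != nil {
		fmt.Println(err)
	}
}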

goroutine 2348 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x31db2483f7712ae4?, {0xc0015dbb20?, 0x4f7ea5?, 0x4bd26a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x274f57e0d50?, 0xc0015dbb80?, 0x4efdd6?, 0x4bd26a0?, 0xc0015dbc08?, 0x4e281b?, 0x274efb60108?, 0x20041?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x67c, {0xc00184f21b?, 0x5e5, 0xc00184f000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000930f08?, {0xc00184f21b?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000930f08, {0xc00184f21b, 0x5e5, 0x5e5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a70e0, {0xc00184f21b?, 0x274f5381868?, 0x21b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f62d0, {0x375b9e0, 0xc000794000})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f62d0}, {0x375b9e0, 0xc000794000}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015dbe78?, {0x375bb20, 0xc0015f62d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0015dbf38?, {0x375bb20?, 0xc0015f62d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f62d0}, {0x375baa0, 0xc0000a70e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000892780?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2347
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 132 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0009670d0, 0x3d)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x21ef6e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b2dda0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000967100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00045d800, {0x375ce20, 0xc000b866c0}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00045d800, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 60
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef
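
Goroutines 132 and 906 are client-go's certificate-rotation workers idling on an empty work queue: wait.BackoffUntil keeps re-running a worker that blocks inside the workqueue's Get (the sync.Cond.Wait frame) until a key arrives. The same pattern reduced to a sketch with a made-up work item:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	stop := make(chan struct{})
	q := workqueue.New()
	go wait.Until(func() {
		for {
			item, shutdown := q.Get() // parks in sync.Cond.Wait while empty
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			q.Done(item) // mark the item finished so it can be re-queued
		}
	}, time.Second, stop)

	q.Add("rotate-client-cert") // hypothetical key
	time.Sleep(100 * time.Millisecond)
	q.ShutDown()
	close(stop)
}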

goroutine 133 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3780900, 0xc0000542a0}, 0xc000963f50, 0xc000963f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3780900, 0xc0000542a0}, 0xa0?, 0xc000963f50, 0xc000963f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3780900?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000963fd0?, 0x66e404?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 60
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a
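
Goroutine 133 is the other half of the rotation machinery, a PollImmediateUntilWithContext loop that re-checks a condition at a fixed interval until it succeeds, fails, or the context is cancelled. Usage looks roughly like this; the condition body is invented:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	start := time.Now()
	// Runs the condition immediately, then once per second, until it returns
	// true or errors, or ctx is cancelled (the select in goroutine 133).
	err := wait.PollImmediateUntilWithContext(ctx, time.Second,
		func(context.Context) (bool, error) {
			return time.Since(start) > 2*time.Second, nil
		})
	fmt.Println("poll finished:", err)
}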

goroutine 59 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b2dec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2277 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b15a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b15a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b15a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b15a00, 0xc00078ea80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2252 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00089b860, 0xc000b0c288)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2061
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2063 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0004f8d00, {0x26fbb8c?, 0xd18c2e2800?}, 0xc00067d5f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc0004f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0004f8d00, 0x32066b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 134 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 133
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2061 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0004f84e0, {0x26fa679?, 0x54f48d?}, 0xc000b0c288)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0004f84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0004f84e0, 0x32066a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2377 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015fe000, 0xc0016ec1e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2374
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2364 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc0004141c0?, {0xc001dbdb20?, 0x4f7ea5?, 0x4bd26a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00185bc41?, 0xc001dbdb80?, 0x4efdd6?, 0x4bd26a0?, 0xc001dbdc08?, 0x4e2985?, 0x274efb60598?, 0xc001497f41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a4, {0xc000543df8?, 0x208, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a32288?, {0xc000543df8?, 0x51c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a32288, {0xc000543df8, 0x208, 0x208})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000794028, {0xc000543df8?, 0xc001856540?, 0x6d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018a80f0, {0x375b9e0, 0xc000b0e190})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0018a80f0}, {0x375b9e0, 0xc000b0e190}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001dbde78?, {0x375bb20, 0xc0018a80f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001dbdf38?, {0x375bb20?, 0xc0018a80f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0018a80f0}, {0x375baa0, 0xc000794028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00199a0c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 907 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3780900, 0xc0000542a0}, 0xc001879f50, 0xc001879f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3780900, 0xc0000542a0}, 0x90?, 0xc001879f50, 0xc001879f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3780900?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001879fd0?, 0x66e404?, 0xc000893c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 920 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c56ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 782
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2362 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0004f9d40, {0x2705239?, 0x60400000004?}, 0xc000070480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004f9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004f9d40, 0xc000070200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2251
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2373 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0004f9860, {0x2705239?, 0x60400000004?}, 0xc000070180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004f9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004f9860, 0xc000070100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2248
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 906 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0020c65d0, 0x36)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x21ef6e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c56d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020c6600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000189480, {0x375ce20, 0xc000569680}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000189480, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2376 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x274f54971a8?, {0xc000acbb20?, 0x4f7ea5?, 0x4?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x274f54971a8?, 0xc000acbb80?, 0x4efdd6?, 0x4bd26a0?, 0xc000acbc08?, 0x4e2985?, 0x2e39352e3836312e?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x640, {0xc0014e6426?, 0x3bda, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a54a08?, {0xc0014e6426?, 0x51c171?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a54a08, {0xc0014e6426, 0x3bda, 0x3bda})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7438, {0xc0014e6426?, 0xc000acbd98?, 0x3e6c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f6450, {0x375b9e0, 0xc000794088})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f6450}, {0x375b9e0, 0xc000794088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375bb20, 0xc0015f6450})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e0c36?, {0x375bb20?, 0xc0015f6450?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f6450}, {0x375baa0, 0xc0000a7438}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x3206668?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2374
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2256 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b14820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b14820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b14820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b14820, 0xc00078e200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2236 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f9520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0004f9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0004f9520, 0xc000b18d00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2276 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b156c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b156c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b156c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b156c0, 0xc00078ea00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2350 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018d62c0, 0xc00199a7e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2347
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 693 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7fff0def4de0?, {0xc00006b918?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x710, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001588630)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0015fe160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0015fe160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b15040, 0xc0015fe160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.Cleanup(0xc000b15040, {0xc000680018, 0x18}, 0xc0014ba170)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:178 +0x15f
k8s.io/minikube/test/integration.CleanupWithLogs(0xc000b15040, {0xc000680018, 0x18}, 0xc0014ba170)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:192 +0x19d
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000b15040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:166 +0x54e
testing.tRunner(0xc000b15040, 0x32065f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1188 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc0009249a0, 0xc001f34ae0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1187
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 921 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020c6600, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 782
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2274 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b15380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b15380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b15380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b15380, 0xc00078e880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2251 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00089b520, {0x26fbb8c?, 0x0?}, 0xc000070200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00089b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00089b520, 0xc001714140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2253 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00089ba00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00089ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00089ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00089ba00, 0xc00078e080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2349 [syscall, locked to thread]:
syscall.SyscallN(0x274f53801e8?, {0xc00164db20?, 0x4f7ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x274f53801e8?, 0xc00164db80?, 0x4efdd6?, 0x4bd26a0?, 0xc00164dc08?, 0x4e2985?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x65c, {0xc001594c8f?, 0x5371, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000931688?, {0xc001594c8f?, 0x51c1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000931688, {0xc001594c8f, 0x5371, 0x5371})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7170, {0xc001594c8f?, 0xc00164dd98?, 0x7e92?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f6300, {0x375b9e0, 0xc000620ad8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f6300}, {0x375b9e0, 0xc000620ad8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375bb20, 0xc0015f6300})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e0c36?, {0x375bb20?, 0xc0015f6300?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f6300}, {0x375baa0, 0xc0000a7170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000179e00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2347
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2391 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0008e1b20?, 0x4f7ea5?, 0x4bd26a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x41?, 0xc0008e1b80?, 0x4efdd6?, 0x4bd26a0?, 0xc0008e1c08?, 0x4e281b?, 0x4d8ba6?, 0x41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7cc, {0xc0015f1466?, 0x39a, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a54788?, {0xc0015f1466?, 0x51c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a54788, {0xc0015f1466, 0x39a, 0x39a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b0e428, {0xc0015f1466?, 0xc001857dc0?, 0x30?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f6360, {0x375b9e0, 0xc0000a70a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f6360}, {0x375b9e0, 0xc0000a70a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0008e1e78?, {0x375bb20, 0xc0015f6360})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0008e1f38?, {0x375bb20?, 0xc0015f6360?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f6360}, {0x375baa0, 0xc000b0e428}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0016ec420?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 693
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2247 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00089a820, 0x32068c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2135
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2275 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b15520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b15520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b15520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b15520, 0xc00078e900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2375 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc00076bb40?, {0xc00076bb20?, 0x4f7ea5?, 0x4bd26a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000050641?, 0xc00076bb80?, 0x4efdd6?, 0x4bd26a0?, 0xc00076bc08?, 0x4e281b?, 0x4d8ba6?, 0xc00098c041?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x784, {0xc000771246?, 0x5ba, 0x59417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a54508?, {0xc000771246?, 0x51c171?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a54508, {0xc000771246, 0x5ba, 0x5ba})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a72d8, {0xc000771246?, 0xc00076bd98?, 0x207?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f6420, {0x375b9e0, 0xc000620b88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f6420}, {0x375b9e0, 0xc000620b88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x548277?, {0x375bb20, 0xc0015f6420})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00076bfa0?, {0x375bb20?, 0xc0015f6420?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f6420}, {0x375baa0, 0xc0000a72d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x32065c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2374
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2254 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00089bba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00089bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00089bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00089bba0, 0xc00078e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2249 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00089b1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00089b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00089b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00089b1e0, 0xc0017140c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2247
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2135 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0000eda00, {0x26fa679?, 0x627333?}, 0x32068c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0000eda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0000eda00, 0x32066e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2255 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc00060e5a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00089bd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00089bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00089bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00089bd40, 0xc00078e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2366 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc00057e000, 0xc0016ec360)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2392 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x6275687469672d6f?, {0xc0016bfb20?, 0x4f7ea5?, 0x4bd26a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4c5255726174614d?, 0xc0016bfb80?, 0x4efdd6?, 0x4bd26a0?, 0xc0016bfc08?, 0x4e281b?, 0x274efb60eb8?, 0x67006f6942746535?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7d4, {0xc0015f113a?, 0x2c6, 0xc0015f1000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a54f08?, {0xc0015f113a?, 0x51c171?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a54f08, {0xc0015f113a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b0e468, {0xc0015f113a?, 0xc0016bfd98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015f6480, {0x375b9e0, 0xc000794100})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375bb20, 0xc0015f6480}, {0x375b9e0, 0xc000794100}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375bb20, 0xc0015f6480})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e0c36?, {0x375bb20?, 0xc0015f6480?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x375bb20, 0xc0015f6480}, {0x375baa0, 0xc000b0e468}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x632e627568746967?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 693
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2363 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7fff0def4de0?, {0xc0016b9ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6e8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0019ca780)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00057e000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00057e000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b15860, 0xc00057e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3780740?, 0xc00088a070?}, 0xc000b15860, {0xc00004d278?, 0x665dc710?}, {0xc027716278?, 0xc0016b9f60?}, {0x627333?, 0x578d6f?}, {0xc000b88300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000b15860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000b15860, 0xc000070480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2362
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2393 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015fe160, 0xc00199a840)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 693
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2374 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7fff0def4de0?, {0xc000acfae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6bc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0019ca2d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0015fe000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0015fe000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f9ba0, 0xc0015fe000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3780740?, 0xc00088a000?}, 0xc0004f9ba0, {0xc00004c1e0?, 0x665dc6d6?}, {0xc0128b9be4?, 0xc000acff60?}, {0x627333?, 0x578d6f?}, {0xc000000480, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0004f9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0004f9ba0, 0xc000070180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2373
	/usr/local/go/src/testing/testing.go:1742 +0x390

TestErrorSpam/setup (192.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-197000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 --driver=hyperv
E0603 03:52:10.829754    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:10.845119    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:10.860034    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:10.891813    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:10.938787    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:11.032846    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:11.205723    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:11.538065    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:12.188257    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:13.477187    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:16.043543    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:21.167346    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:31.411408    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:52:51.897171    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:53:32.860039    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 03:54:54.788300    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-197000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 --driver=hyperv: (3m12.8775604s)
error_spam_test.go:96: unexpected stderr: "W0603 03:52:04.204648   11504 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-197000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19008
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-197000" primary control-plane node in "nospam-197000" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-197000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0603 03:52:04.204648   11504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (192.89s)
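
The offending stderr line itself is benign: minikube asks the Docker CLI layer for the current context, and the "default" context's metadata directory simply does not exist on this runner. The long directory name in the path looks like the SHA-256 of the context name, which appears to be how Docker's context store keys its metadata; a small sketch to check that assumption:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	sum := sha256.Sum256([]byte("default"))
	// If the store really keys directories by sha256(name), this prints the
	// 37a8eec1... component seen in the warning above.
	fmt.Printf("%x\n", sum)
}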

TestFunctional/serial/MinikubeKubectlCmdDirectly (32.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
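
This failure is a stale build artifact rather than a cluster problem: os.Link surfaces the Windows ERROR_ALREADY_EXISTS message because out\kubectl.exe survived an earlier run, and a hard link cannot overwrite an existing file. A hedged sketch of the usual remove-then-link workaround (not necessarily how the suite fixes it):

package main

import (
	"fmt"
	"os"
)

// linkForce points dst at src as a hard link, deleting any stale dst first.
func linkForce(src, dst string) error {
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing stale %s: %w", dst, err)
	}
	return os.Link(src, dst)
}

func main() {
	if err := linkForce("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
		fmt.Println(err)
	}
}
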
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-754300 -n functional-754300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-754300 -n functional-754300: (11.610362s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 logs -n 25: (8.1693381s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:56 PDT | 03 Jun 24 03:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:56 PDT | 03 Jun 24 03:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:56 PDT | 03 Jun 24 03:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:56 PDT | 03 Jun 24 03:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:56 PDT | 03 Jun 24 03:57 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:57 PDT | 03 Jun 24 03:57 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-197000 --log_dir                                     | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:57 PDT | 03 Jun 24 03:57 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-197000                                            | nospam-197000     | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:57 PDT | 03 Jun 24 03:58 PDT |
	| start   | -p functional-754300                                        | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:58 PDT | 03 Jun 24 04:01 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-754300                                        | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:01 PDT | 03 Jun 24 04:03 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache add                                 | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:03 PDT | 03 Jun 24 04:04 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache add                                 | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache add                                 | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache add                                 | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | minikube-local-cache-test:functional-754300                 |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache delete                              | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | minikube-local-cache-test:functional-754300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	| ssh     | functional-754300 ssh sudo                                  | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-754300                                           | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT | 03 Jun 24 04:04 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-754300 ssh                                       | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:04 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-754300 cache reload                              | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:05 PDT | 03 Jun 24 04:05 PDT |
	| ssh     | functional-754300 ssh                                       | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:05 PDT | 03 Jun 24 04:05 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:05 PDT | 03 Jun 24 04:05 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:05 PDT | 03 Jun 24 04:05 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-754300 kubectl --                                | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:05 PDT | 03 Jun 24 04:05 PDT |
	|         | --context functional-754300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
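	
	The table above records, among other steps, a complete image-cache lifecycle against the functional-754300 profile. A minimal sketch of that sequence, reconstructed from the table's own commands (same binary, profile, and image tags as recorded there; flags abbreviated):
	
		out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.1
		out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.3
		out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:latest
		out/minikube-windows-amd64.exe cache list
		out/minikube-windows-amd64.exe -p functional-754300 cache reload
		out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
		out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest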
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 04:01:57
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 04:01:57.439539    8512 out.go:291] Setting OutFile to fd 572 ...
	I0603 04:01:57.440300    8512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:01:57.440300    8512 out.go:304] Setting ErrFile to fd 968...
	I0603 04:01:57.440300    8512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:01:57.462638    8512 out.go:298] Setting JSON to false
	I0603 04:01:57.465659    8512 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1745,"bootTime":1717410772,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 04:01:57.465659    8512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 04:01:57.466537    8512 out.go:177] * [functional-754300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 04:01:57.471278    8512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:01:57.471278    8512 notify.go:220] Checking for updates...
	I0603 04:01:57.475397    8512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 04:01:57.477716    8512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 04:01:57.482474    8512 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 04:01:57.485280    8512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 04:01:57.488595    8512 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:01:57.488675    8512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 04:02:02.592650    8512 out.go:177] * Using the hyperv driver based on existing profile
	I0603 04:02:02.597139    8512 start.go:297] selected driver: hyperv
	I0603 04:02:02.597139    8512 start.go:901] validating driver "hyperv" against &{Name:functional-754300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-754300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.94.139 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:02:02.597139    8512 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 04:02:02.648281    8512 cni.go:84] Creating CNI manager for ""
	I0603 04:02:02.648490    8512 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 04:02:02.648654    8512 start.go:340] cluster config:
	{Name:functional-754300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-754300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.94.139 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:02:02.648654    8512 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 04:02:02.654401    8512 out.go:177] * Starting "functional-754300" primary control-plane node in "functional-754300" cluster
	I0603 04:02:02.657500    8512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:02:02.657657    8512 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 04:02:02.657657    8512 cache.go:56] Caching tarball of preloaded images
	I0603 04:02:02.657657    8512 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:02:02.657657    8512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:02:02.658346    8512 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\config.json ...
	I0603 04:02:02.659056    8512 start.go:360] acquireMachinesLock for functional-754300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:02:02.660706    8512 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-754300"
	I0603 04:02:02.660824    8512 start.go:96] Skipping create...Using existing machine configuration
	I0603 04:02:02.660824    8512 fix.go:54] fixHost starting: 
	I0603 04:02:02.661460    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:05.317470    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:05.317470    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:05.328788    8512 fix.go:112] recreateIfNeeded on functional-754300: state=Running err=<nil>
	W0603 04:02:05.328788    8512 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 04:02:05.334691    8512 out.go:177] * Updating the running hyperv "functional-754300" VM ...
	I0603 04:02:05.337404    8512 machine.go:94] provisionDockerMachine start ...
	I0603 04:02:05.337404    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:07.437776    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:07.437776    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:07.437776    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:09.951582    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:09.964351    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:09.970559    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:09.970559    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:09.970559    8512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:02:10.099498    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-754300
	
	I0603 04:02:10.099498    8512 buildroot.go:166] provisioning hostname "functional-754300"
	I0603 04:02:10.099498    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:12.153698    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:12.153698    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:12.165571    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:14.673627    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:14.673627    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:14.691342    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:14.691586    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:14.691586    8512 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-754300 && echo "functional-754300" | sudo tee /etc/hostname
	I0603 04:02:14.839479    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-754300
	
	I0603 04:02:14.839479    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:16.927932    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:16.927932    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:16.927932    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:19.395630    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:19.395630    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:19.412625    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:19.413328    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:19.413328    8512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-754300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-754300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-754300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:02:19.540450    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
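	The /etc/hosts guard above is idempotent: it acts only when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry in place and otherwise appending one. A hypothetical dry run of the same logic against a scratch copy (match simplified; GNU sed assumed):
	
		cp /etc/hosts /tmp/hosts.demo
		if ! grep -q 'functional-754300' /tmp/hosts.demo; then
			if grep -q '^127.0.1.1' /tmp/hosts.demo; then
				sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-754300/' /tmp/hosts.demo
			else
				echo '127.0.1.1 functional-754300' >> /tmp/hosts.demo
			fi
		fi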
	I0603 04:02:19.540450    8512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:02:19.540596    8512 buildroot.go:174] setting up certificates
	I0603 04:02:19.540596    8512 provision.go:84] configureAuth start
	I0603 04:02:19.540596    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:21.597738    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:21.597738    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:21.608474    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:24.075309    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:24.075309    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:24.075309    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:26.244799    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:26.244799    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:26.256325    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:28.730339    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:28.730339    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:28.730339    8512 provision.go:143] copyHostCerts
	I0603 04:02:28.742796    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:02:28.743226    8512 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:02:28.743314    8512 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:02:28.743770    8512 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:02:28.744450    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:02:28.745135    8512 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:02:28.745135    8512 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:02:28.745205    8512 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:02:28.746835    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:02:28.746835    8512 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:02:28.746835    8512 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:02:28.747555    8512 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:02:28.748615    8512 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-754300 san=[127.0.0.1 172.17.94.139 functional-754300 localhost minikube]
	I0603 04:02:29.054405    8512 provision.go:177] copyRemoteCerts
	I0603 04:02:29.073410    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:02:29.073494    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:31.152194    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:31.152194    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:31.152270    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:33.620334    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:33.620334    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:33.632008    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
	I0603 04:02:33.734880    8512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6612817s)
	I0603 04:02:33.734975    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:02:33.735470    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:02:33.783051    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:02:33.783427    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 04:02:33.828052    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:02:33.828345    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:02:33.869052    8512 provision.go:87] duration metric: took 14.3284485s to configureAuth
	I0603 04:02:33.869052    8512 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:02:33.875297    8512 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:02:33.875297    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:35.920056    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:35.920056    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:35.920056    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:38.395504    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:38.395504    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:38.412679    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:38.413243    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:38.413243    8512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:02:38.540328    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:02:38.540328    8512 buildroot.go:70] root file system type: tmpfs
	I0603 04:02:38.540567    8512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:02:38.540567    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:40.676211    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:40.686917    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:40.686917    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:43.185084    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:43.185084    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:43.202497    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:43.203289    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:43.203289    8512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:02:43.355102    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:02:43.355102    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:45.449421    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:45.449421    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:45.449421    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:47.904260    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:47.904260    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:47.920802    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:47.921571    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:47.921571    8512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:02:48.067502    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 04:02:48.067567    8512 machine.go:97] duration metric: took 42.7301169s to provisionDockerMachine
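	The one-liner at 04:02:47 is an idempotent unit update: the candidate file is written to docker.service.new, and the live unit is swapped in (with daemon-reload, enable, and restart) only when diff exits non-zero, i.e. when the two files actually differ. The same pattern, sketched generically:
	
		new=/lib/systemd/system/docker.service.new
		cur=/lib/systemd/system/docker.service
		# diff exits 0 when the files are identical, so the block runs only on a real change
		sudo diff -u "$cur" "$new" || {
			sudo mv "$new" "$cur"
			sudo systemctl daemon-reload
			sudo systemctl enable docker
			sudo systemctl restart docker
		}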
	I0603 04:02:48.067567    8512 start.go:293] postStartSetup for "functional-754300" (driver="hyperv")
	I0603 04:02:48.067626    8512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:02:48.078386    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:02:48.078386    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:50.182022    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:50.182022    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:50.182022    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:52.651700    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:52.651700    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:52.663444    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
	I0603 04:02:52.769894    8512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6915008s)
	I0603 04:02:52.780179    8512 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:02:52.792985    8512 command_runner.go:130] > NAME=Buildroot
	I0603 04:02:52.793019    8512 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 04:02:52.793019    8512 command_runner.go:130] > ID=buildroot
	I0603 04:02:52.793019    8512 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 04:02:52.793019    8512 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 04:02:52.793019    8512 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:02:52.793019    8512 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:02:52.793761    8512 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:02:52.794405    8512 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:02:52.794978    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:02:52.795928    8512 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\7364\hosts -> hosts in /etc/test/nested/copy/7364
	I0603 04:02:52.795928    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\7364\hosts -> /etc/test/nested/copy/7364/hosts
	I0603 04:02:52.809130    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7364
	I0603 04:02:52.826204    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:02:52.876532    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\7364\hosts --> /etc/test/nested/copy/7364/hosts (40 bytes)
	I0603 04:02:52.921722    8512 start.go:296] duration metric: took 4.8541479s for postStartSetup
	I0603 04:02:52.921722    8512 fix.go:56] duration metric: took 50.2608402s for fixHost
	I0603 04:02:52.921722    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:55.015137    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:55.015137    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:55.015137    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:02:57.475441    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:02:57.475441    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:57.491855    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:02:57.492508    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:02:57.492508    8512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 04:02:57.615563    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717412577.619992410
	
	I0603 04:02:57.615563    8512 fix.go:216] guest clock: 1717412577.619992410
	I0603 04:02:57.615563    8512 fix.go:229] Guest: 2024-06-03 04:02:57.61999241 -0700 PDT Remote: 2024-06-03 04:02:52.9217227 -0700 PDT m=+55.565312301 (delta=4.69826971s)
	I0603 04:02:57.616095    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:02:59.686133    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:02:59.686324    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:02:59.686408    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:02.180749    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:02.180749    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:02.195839    8512 main.go:141] libmachine: Using SSH client type: native
	I0603 04:03:02.196517    8512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.139 22 <nil> <nil>}
	I0603 04:03:02.196517    8512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717412577
	I0603 04:03:02.332682    8512 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:02:57 UTC 2024
	
	I0603 04:03:02.335325    8512 fix.go:236] clock set: Mon Jun  3 11:02:57 UTC 2024
	 (err=<nil>)
	I0603 04:03:02.335325    8512 start.go:83] releasing machines lock for "functional-754300", held for 59.6744899s
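	The clock fix above reads the guest clock over SSH, compares it with the host (a 4.698s drift here), and resets the guest using the host's epoch seconds. The epoch value in the log can be checked directly (GNU date assumed):
	
		date -u -d @1717412577
		# Mon Jun  3 11:02:57 UTC 2024  -- matches the "clock set" line above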
	I0603 04:03:02.335325    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:04.400466    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:04.400466    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:04.416211    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:06.879741    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:06.889447    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:06.893269    8512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:03:06.893269    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:06.907094    8512 ssh_runner.go:195] Run: cat /version.json
	I0603 04:03:06.907094    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:09.040958    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:09.040958    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:09.053472    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:09.074004    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:09.074095    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:09.074157    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:11.702897    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:11.702897    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:11.704969    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
	I0603 04:03:11.730081    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:11.730155    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:11.730382    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
	I0603 04:03:11.845252    8512 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 04:03:11.845330    8512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9520535s)
	I0603 04:03:11.845330    8512 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 04:03:11.845330    8512 ssh_runner.go:235] Completed: cat /version.json: (4.9382288s)
	I0603 04:03:11.857965    8512 ssh_runner.go:195] Run: systemctl --version
	I0603 04:03:11.860077    8512 command_runner.go:130] > systemd 252 (252)
	I0603 04:03:11.860077    8512 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 04:03:11.881168    8512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 04:03:11.884198    8512 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 04:03:11.890283    8512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:03:11.902265    8512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:03:11.919667    8512 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 04:03:11.919667    8512 start.go:494] detecting cgroup driver to use...
	I0603 04:03:11.920031    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:03:11.956062    8512 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 04:03:11.966242    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:03:12.000883    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:03:12.020347    8512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:03:12.035244    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:03:12.067557    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:03:12.097719    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:03:12.134781    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:03:12.163368    8512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:03:12.200813    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:03:12.230134    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:03:12.259738    8512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:03:12.291638    8512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:03:12.302245    8512 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 04:03:12.323538    8512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:03:12.361568    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:12.628125    8512 ssh_runner.go:195] Run: sudo systemctl restart containerd
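	The sed runs above pin containerd to the cgroupfs driver (SystemdCgroup = false), normalize the runc runtime to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before restarting the service. A quick spot-check inside the VM (hypothetical session):
	
		grep -E 'SystemdCgroup|conf_dir' /etc/containerd/config.toml
		#   SystemdCgroup = false
		#   conf_dir = "/etc/cni/net.d"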
	I0603 04:03:12.656568    8512 start.go:494] detecting cgroup driver to use...
	I0603 04:03:12.680261    8512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:03:12.701258    8512 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 04:03:12.701353    8512 command_runner.go:130] > [Unit]
	I0603 04:03:12.701353    8512 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 04:03:12.701353    8512 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 04:03:12.701353    8512 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 04:03:12.701353    8512 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 04:03:12.701353    8512 command_runner.go:130] > StartLimitBurst=3
	I0603 04:03:12.701353    8512 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 04:03:12.701353    8512 command_runner.go:130] > [Service]
	I0603 04:03:12.701353    8512 command_runner.go:130] > Type=notify
	I0603 04:03:12.701435    8512 command_runner.go:130] > Restart=on-failure
	I0603 04:03:12.701435    8512 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 04:03:12.701435    8512 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 04:03:12.701435    8512 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 04:03:12.701435    8512 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 04:03:12.701503    8512 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 04:03:12.701503    8512 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 04:03:12.701503    8512 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 04:03:12.701503    8512 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 04:03:12.701503    8512 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 04:03:12.701503    8512 command_runner.go:130] > ExecStart=
	I0603 04:03:12.701503    8512 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 04:03:12.701503    8512 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 04:03:12.701503    8512 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 04:03:12.701503    8512 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 04:03:12.701503    8512 command_runner.go:130] > LimitNOFILE=infinity
	I0603 04:03:12.701503    8512 command_runner.go:130] > LimitNPROC=infinity
	I0603 04:03:12.701503    8512 command_runner.go:130] > LimitCORE=infinity
	I0603 04:03:12.701503    8512 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 04:03:12.701503    8512 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 04:03:12.701503    8512 command_runner.go:130] > TasksMax=infinity
	I0603 04:03:12.701503    8512 command_runner.go:130] > TimeoutStartSec=0
	I0603 04:03:12.701503    8512 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 04:03:12.701503    8512 command_runner.go:130] > Delegate=yes
	I0603 04:03:12.701503    8512 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 04:03:12.701503    8512 command_runner.go:130] > KillMode=process
	I0603 04:03:12.701503    8512 command_runner.go:130] > [Install]
	I0603 04:03:12.701503    8512 command_runner.go:130] > WantedBy=multi-user.target
	I0603 04:03:12.714199    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:03:12.747813    8512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:03:12.797573    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:03:12.835364    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:03:12.858166    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:03:12.892217    8512 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 04:03:12.903504    8512 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:03:12.907979    8512 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 04:03:12.926441    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:03:12.943793    8512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:03:12.987904    8512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:03:13.255286    8512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:03:13.480789    8512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:03:13.480882    8512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:03:13.530434    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:13.770676    8512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:03:26.702855    8512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.932112s)
	I0603 04:03:26.716841    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:03:26.760891    8512 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0603 04:03:26.820728    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:03:26.857372    8512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:03:27.073863    8512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:03:27.260109    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:27.447536    8512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:03:27.490746    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:03:27.537636    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:27.739592    8512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:03:27.859342    8512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:03:27.871155    8512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:03:27.879625    8512 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 04:03:27.879625    8512 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 04:03:27.879625    8512 command_runner.go:130] > Device: 0,22	Inode: 1499        Links: 1
	I0603 04:03:27.879772    8512 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 04:03:27.879772    8512 command_runner.go:130] > Access: 2024-06-03 11:03:27.768583746 +0000
	I0603 04:03:27.879772    8512 command_runner.go:130] > Modify: 2024-06-03 11:03:27.768583746 +0000
	I0603 04:03:27.879772    8512 command_runner.go:130] > Change: 2024-06-03 11:03:27.772583673 +0000
	I0603 04:03:27.879772    8512 command_runner.go:130] >  Birth: -
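	The readiness wait above is nothing more than a stat of the CRI socket; the Access mode line (srw-rw----, root:docker) confirms a live socket. An equivalent one-line probe, assuming it is run inside the VM:
	
		test -S /var/run/cri-dockerd.sock && echo "cri-dockerd socket ready"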
	I0603 04:03:27.879853    8512 start.go:562] Will wait 60s for crictl version
	I0603 04:03:27.893383    8512 ssh_runner.go:195] Run: which crictl
	I0603 04:03:27.899514    8512 command_runner.go:130] > /usr/bin/crictl
	I0603 04:03:27.913071    8512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:03:27.965545    8512 command_runner.go:130] > Version:  0.1.0
	I0603 04:03:27.965545    8512 command_runner.go:130] > RuntimeName:  docker
	I0603 04:03:27.965545    8512 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 04:03:27.965545    8512 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 04:03:27.965545    8512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:03:27.976975    8512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:03:28.017837    8512 command_runner.go:130] > 26.0.2
	I0603 04:03:28.029361    8512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:03:28.072942    8512 command_runner.go:130] > 26.0.2
	I0603 04:03:28.078709    8512 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:03:28.078709    8512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:03:28.083529    8512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:03:28.083529    8512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:03:28.083529    8512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:03:28.083529    8512 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:03:28.085710    8512 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:03:28.085710    8512 ip.go:210] interface addr: 172.17.80.1/20
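
The ip.go lines above enumerate the host's interfaces and pick the first one whose name matches the "vEthernet (Default Switch)" prefix. A self-contained sketch of the same prefix match (illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // findInterfaceByPrefix returns the first interface whose name starts
    // with prefix, mirroring the search logged by ip.go above.
    func findInterfaceByPrefix(prefix string) (*net.Interface, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for i := range ifaces {
            if strings.HasPrefix(ifaces[i].Name, prefix) {
                return &ifaces[i], nil
            }
        }
        return nil, fmt.Errorf("no interface matching prefix %q", prefix)
    }

    func main() {
        ifc, err := findInterfaceByPrefix("vEthernet (Default Switch)")
        if err != nil {
            fmt.Println(err)
            return
        }
        addrs, _ := ifc.Addrs() // e.g. the fe80::.../64 and 172.17.80.1/20 addrs above
        fmt.Println(ifc.Name, addrs)
    }
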
	I0603 04:03:28.094496    8512 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:03:28.105613    8512 command_runner.go:130] > 172.17.80.1	host.minikube.internal
	I0603 04:03:28.106141    8512 kubeadm.go:877] updating cluster {Name:functional-754300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-754300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.94.139 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 04:03:28.106329    8512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:03:28.115671    8512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:03:28.138910    8512 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 04:03:28.138973    8512 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 04:03:28.138973    8512 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 04:03:28.138973    8512 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 04:03:28.139067    8512 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 04:03:28.139067    8512 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 04:03:28.139142    8512 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 04:03:28.139174    8512 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 04:03:28.139240    8512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 04:03:28.139278    8512 docker.go:615] Images already preloaded, skipping extraction
	I0603 04:03:28.150893    8512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 04:03:28.173102    8512 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 04:03:28.173102    8512 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 04:03:28.173102    8512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 04:03:28.173102    8512 cache_images.go:84] Images are preloaded, skipping loading
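
docker.go decides the images are preloaded by listing `docker images` with a Go template and checking that every expected ref is present. A rough equivalent of that check (expected list abbreviated; not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected image ref appears in
    // the local docker image list.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, ref := range expected {
            if !have[ref] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.30.1",
            "registry.k8s.io/etcd:3.5.12-0",
        })
        fmt.Println(ok, err)
    }
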
	I0603 04:03:28.173102    8512 kubeadm.go:928] updating node { 172.17.94.139 8441 v1.30.1 docker true true} ...
	I0603 04:03:28.173102    8512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-754300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.94.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-754300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 04:03:28.184460    8512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 04:03:28.211862    8512 command_runner.go:130] > cgroupfs
	I0603 04:03:28.213107    8512 cni.go:84] Creating CNI manager for ""
	I0603 04:03:28.213144    8512 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 04:03:28.213144    8512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 04:03:28.213225    8512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.94.139 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-754300 NodeName:functional-754300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.94.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.94.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 04:03:28.214670    8512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.94.139
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-754300"
	  kubeletExtraArgs:
	    node-ip: 172.17.94.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.94.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
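
minikube renders kubeadm YAML like the block above from a Go text/template. A toy rendering of a few of its fields (template and field names here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // tmpl is a cut-down stand-in for minikube's kubeadm config template.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.Endpoint}}
    kubernetesVersion: {{.Version}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the log above.
        t.Execute(os.Stdout, map[string]string{
            "Endpoint":      "control-plane.minikube.internal:8441",
            "Version":       "v1.30.1",
            "PodSubnet":     "10.244.0.0/16",
            "ServiceSubnet": "10.96.0.0/12",
        })
    }
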
	
	I0603 04:03:28.233925    8512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:03:28.244033    8512 command_runner.go:130] > kubeadm
	I0603 04:03:28.251236    8512 command_runner.go:130] > kubectl
	I0603 04:03:28.251236    8512 command_runner.go:130] > kubelet
	I0603 04:03:28.251236    8512 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 04:03:28.261472    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 04:03:28.279960    8512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 04:03:28.310257    8512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:03:28.340830    8512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 04:03:28.378839    8512 ssh_runner.go:195] Run: grep 172.17.94.139	control-plane.minikube.internal$ /etc/hosts
	I0603 04:03:28.385131    8512 command_runner.go:130] > 172.17.94.139	control-plane.minikube.internal
	I0603 04:03:28.397065    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:28.587961    8512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:03:28.613496    8512 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300 for IP: 172.17.94.139
	I0603 04:03:28.615501    8512 certs.go:194] generating shared ca certs ...
	I0603 04:03:28.615501    8512 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:03:28.616254    8512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:03:28.616797    8512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:03:28.616911    8512 certs.go:256] generating profile certs ...
	I0603 04:03:28.618070    8512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.key
	I0603 04:03:28.618070    8512 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\apiserver.key.88f40bcc
	I0603 04:03:28.618747    8512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\proxy-client.key
	I0603 04:03:28.618747    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:03:28.618747    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:03:28.618747    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:03:28.619339    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:03:28.619544    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:03:28.619544    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:03:28.619544    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:03:28.619544    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:03:28.620116    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:03:28.620840    8512 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:03:28.620840    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:03:28.620840    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:03:28.621484    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:03:28.621484    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:03:28.622144    8512 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:03:28.622144    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:03:28.622739    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:03:28.622807    8512 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:03:28.624087    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:03:28.667665    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:03:28.709454    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:03:28.752325    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:03:28.798698    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 04:03:28.865652    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 04:03:28.919352    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:03:29.011084    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:03:29.077118    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:03:29.127484    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:03:29.181201    8512 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:03:29.232228    8512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 04:03:29.283778    8512 ssh_runner.go:195] Run: openssl version
	I0603 04:03:29.293167    8512 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 04:03:29.307944    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:03:29.343012    8512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:03:29.352236    8512 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:03:29.352236    8512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:03:29.364925    8512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:03:29.373283    8512 command_runner.go:130] > 3ec20f2e
	I0603 04:03:29.386477    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:03:29.418477    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:03:29.451096    8512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:03:29.458827    8512 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:03:29.458928    8512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:03:29.470894    8512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:03:29.479592    8512 command_runner.go:130] > b5213941
	I0603 04:03:29.491884    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 04:03:29.524107    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:03:29.559053    8512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:03:29.567061    8512 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:03:29.567061    8512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:03:29.581144    8512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:03:29.590600    8512 command_runner.go:130] > 51391683
	I0603 04:03:29.606285    8512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
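
The `openssl x509 -hash` / `ln -fs` pairs above install each CA into OpenSSL's hashed lookup directory: a cert is found when /etc/ssl/certs/<subject-hash>.0 points at it. A compact Go sketch of that step, shelling out to the same openssl invocation (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert symlinks a PEM cert into /etc/ssl/certs under its
    // OpenSSL subject-hash name, e.g. b5213941.0.
    func installCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace any stale link, mirroring `ln -fs`
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
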
	I0603 04:03:29.645712    8512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:03:29.654802    8512 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:03:29.654802    8512 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 04:03:29.654802    8512 command_runner.go:130] > Device: 8,1	Inode: 1055058     Links: 1
	I0603 04:03:29.654802    8512 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 04:03:29.654802    8512 command_runner.go:130] > Access: 2024-06-03 11:00:49.550234995 +0000
	I0603 04:03:29.654802    8512 command_runner.go:130] > Modify: 2024-06-03 11:00:49.550234995 +0000
	I0603 04:03:29.654802    8512 command_runner.go:130] > Change: 2024-06-03 11:00:49.550234995 +0000
	I0603 04:03:29.654802    8512 command_runner.go:130] >  Birth: 2024-06-03 11:00:49.550234995 +0000
	I0603 04:03:29.666422    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 04:03:29.676664    8512 command_runner.go:130] > Certificate will not expire
	I0603 04:03:29.690612    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 04:03:29.696194    8512 command_runner.go:130] > Certificate will not expire
	I0603 04:03:29.718402    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 04:03:29.730097    8512 command_runner.go:130] > Certificate will not expire
	I0603 04:03:29.743251    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 04:03:29.751833    8512 command_runner.go:130] > Certificate will not expire
	I0603 04:03:29.767374    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 04:03:29.774750    8512 command_runner.go:130] > Certificate will not expire
	I0603 04:03:29.794308    8512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 04:03:29.806649    8512 command_runner.go:130] > Certificate will not expire
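
Each `openssl x509 -checkend 86400` probe above asks whether a cert expires within the next 24 hours. The same check can be done natively with crypto/x509; a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire within 24h")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
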
	I0603 04:03:29.806649    8512 kubeadm.go:391] StartCluster: {Name:functional-754300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-754300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.94.139 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:03:29.819072    8512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 04:03:29.882461    8512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 04:03:29.901278    8512 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 04:03:29.901278    8512 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 04:03:29.901278    8512 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 04:03:29.901278    8512 command_runner.go:130] > member
	W0603 04:03:29.901278    8512 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 04:03:29.901278    8512 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 04:03:29.901278    8512 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 04:03:29.916664    8512 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 04:03:29.985876    8512 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 04:03:29.991244    8512 kubeconfig.go:125] found "functional-754300" server: "https://172.17.94.139:8441"
	I0603 04:03:29.992606    8512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:03:29.993289    8512 kapi.go:59] client config for functional-754300: &rest.Config{Host:"https://172.17.94.139:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-754300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-754300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 04:03:29.994751    8512 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 04:03:30.009877    8512 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 04:03:30.035412    8512 kubeadm.go:624] The running cluster does not require reconfiguration: 172.17.94.139
	I0603 04:03:30.035637    8512 kubeadm.go:1154] stopping kube-system containers ...
	I0603 04:03:30.046940    8512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 04:03:30.096217    8512 command_runner.go:130] > ad2f48676914
	I0603 04:03:30.096217    8512 command_runner.go:130] > 78af648444af
	I0603 04:03:30.096217    8512 command_runner.go:130] > 5a3784043c01
	I0603 04:03:30.096217    8512 command_runner.go:130] > 01457c35697c
	I0603 04:03:30.096217    8512 command_runner.go:130] > f3c25fe55ebd
	I0603 04:03:30.096217    8512 command_runner.go:130] > 9dc73769a4aa
	I0603 04:03:30.096217    8512 command_runner.go:130] > 56a74ddb22b0
	I0603 04:03:30.096330    8512 command_runner.go:130] > bf568585498f
	I0603 04:03:30.096330    8512 command_runner.go:130] > a9d418f63f2a
	I0603 04:03:30.096330    8512 command_runner.go:130] > e828e836c3eb
	I0603 04:03:30.096330    8512 command_runner.go:130] > a99ea32f8ab3
	I0603 04:03:30.096330    8512 command_runner.go:130] > 02d8b96b2cf5
	I0603 04:03:30.096330    8512 command_runner.go:130] > 020ef032055a
	I0603 04:03:30.096330    8512 command_runner.go:130] > 5ff4b6f01e35
	I0603 04:03:30.096330    8512 command_runner.go:130] > a21a265088f6
	I0603 04:03:30.096330    8512 command_runner.go:130] > 2b29438c873e
	I0603 04:03:30.096330    8512 command_runner.go:130] > 91c70733b05f
	I0603 04:03:30.096330    8512 command_runner.go:130] > ac9f1dba44ee
	I0603 04:03:30.096405    8512 command_runner.go:130] > 8192e5482a70
	I0603 04:03:30.096405    8512 command_runner.go:130] > c97fa507a943
	I0603 04:03:30.096405    8512 command_runner.go:130] > 7e36389bd34c
	I0603 04:03:30.096405    8512 command_runner.go:130] > 681e9bcaf47a
	I0603 04:03:30.096405    8512 command_runner.go:130] > acbf42a98f34
	I0603 04:03:30.096439    8512 command_runner.go:130] > 7038e868521e
	I0603 04:03:30.096439    8512 command_runner.go:130] > d0cdf60ab102
	I0603 04:03:30.096439    8512 command_runner.go:130] > dade41725926
	I0603 04:03:30.096439    8512 command_runner.go:130] > 9478dc5ba6de
	I0603 04:03:30.096523    8512 docker.go:483] Stopping containers: [ad2f48676914 78af648444af 5a3784043c01 01457c35697c f3c25fe55ebd 9dc73769a4aa 56a74ddb22b0 bf568585498f a9d418f63f2a e828e836c3eb a99ea32f8ab3 02d8b96b2cf5 020ef032055a 5ff4b6f01e35 a21a265088f6 2b29438c873e 91c70733b05f ac9f1dba44ee 8192e5482a70 c97fa507a943 7e36389bd34c 681e9bcaf47a acbf42a98f34 7038e868521e d0cdf60ab102 dade41725926 9478dc5ba6de]
	I0603 04:03:30.106265    8512 ssh_runner.go:195] Run: docker stop ad2f48676914 78af648444af 5a3784043c01 01457c35697c f3c25fe55ebd 9dc73769a4aa 56a74ddb22b0 bf568585498f a9d418f63f2a e828e836c3eb a99ea32f8ab3 02d8b96b2cf5 020ef032055a 5ff4b6f01e35 a21a265088f6 2b29438c873e 91c70733b05f ac9f1dba44ee 8192e5482a70 c97fa507a943 7e36389bd34c 681e9bcaf47a acbf42a98f34 7038e868521e d0cdf60ab102 dade41725926 9478dc5ba6de
	I0603 04:03:30.905001    8512 command_runner.go:130] > ad2f48676914
	I0603 04:03:30.906366    8512 command_runner.go:130] > 78af648444af
	I0603 04:03:30.906366    8512 command_runner.go:130] > 5a3784043c01
	I0603 04:03:30.906366    8512 command_runner.go:130] > 01457c35697c
	I0603 04:03:30.906366    8512 command_runner.go:130] > f3c25fe55ebd
	I0603 04:03:30.906366    8512 command_runner.go:130] > 9dc73769a4aa
	I0603 04:03:30.906366    8512 command_runner.go:130] > 56a74ddb22b0
	I0603 04:03:30.906366    8512 command_runner.go:130] > bf568585498f
	I0603 04:03:30.906366    8512 command_runner.go:130] > a9d418f63f2a
	I0603 04:03:30.906366    8512 command_runner.go:130] > e828e836c3eb
	I0603 04:03:30.906366    8512 command_runner.go:130] > a99ea32f8ab3
	I0603 04:03:30.906476    8512 command_runner.go:130] > 02d8b96b2cf5
	I0603 04:03:30.906476    8512 command_runner.go:130] > 020ef032055a
	I0603 04:03:30.906476    8512 command_runner.go:130] > 5ff4b6f01e35
	I0603 04:03:30.906476    8512 command_runner.go:130] > a21a265088f6
	I0603 04:03:30.906476    8512 command_runner.go:130] > 2b29438c873e
	I0603 04:03:30.906476    8512 command_runner.go:130] > 91c70733b05f
	I0603 04:03:30.906476    8512 command_runner.go:130] > ac9f1dba44ee
	I0603 04:03:30.906476    8512 command_runner.go:130] > 8192e5482a70
	I0603 04:03:30.906476    8512 command_runner.go:130] > c97fa507a943
	I0603 04:03:30.906476    8512 command_runner.go:130] > 7e36389bd34c
	I0603 04:03:30.906476    8512 command_runner.go:130] > 681e9bcaf47a
	I0603 04:03:30.906573    8512 command_runner.go:130] > acbf42a98f34
	I0603 04:03:30.906573    8512 command_runner.go:130] > 7038e868521e
	I0603 04:03:30.906573    8512 command_runner.go:130] > d0cdf60ab102
	I0603 04:03:30.906573    8512 command_runner.go:130] > dade41725926
	I0603 04:03:30.906573    8512 command_runner.go:130] > 9478dc5ba6de
	I0603 04:03:30.920277    8512 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 04:03:31.006572    8512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 04:03:31.030827    8512 command_runner.go:130] > -rw------- 1 root root 5651 Jun  3 11:00 /etc/kubernetes/admin.conf
	I0603 04:03:31.030873    8512 command_runner.go:130] > -rw------- 1 root root 5653 Jun  3 11:00 /etc/kubernetes/controller-manager.conf
	I0603 04:03:31.030873    8512 command_runner.go:130] > -rw------- 1 root root 2007 Jun  3 11:01 /etc/kubernetes/kubelet.conf
	I0603 04:03:31.030905    8512 command_runner.go:130] > -rw------- 1 root root 5601 Jun  3 11:00 /etc/kubernetes/scheduler.conf
	I0603 04:03:31.031059    8512 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Jun  3 11:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun  3 11:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun  3 11:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun  3 11:00 /etc/kubernetes/scheduler.conf
	
	I0603 04:03:31.044099    8512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0603 04:03:31.059668    8512 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0603 04:03:31.076420    8512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0603 04:03:31.088535    8512 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0603 04:03:31.103016    8512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0603 04:03:31.106299    8512 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0603 04:03:31.134145    8512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 04:03:31.161524    8512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0603 04:03:31.170948    8512 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0603 04:03:31.189290    8512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 04:03:31.221066    8512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 04:03:31.237421    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:31.307956    8512 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 04:03:31.310401    8512 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 04:03:31.310466    8512 command_runner.go:130] > [certs] Using the existing "sa" key
	I0603 04:03:31.310466    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:32.383711    8512 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 04:03:32.385876    8512 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0603 04:03:32.385924    8512 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0603 04:03:32.385924    8512 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0603 04:03:32.385924    8512 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 04:03:32.385961    8512 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 04:03:32.385997    8512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0755299s)
	I0603 04:03:32.385997    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:32.672999    8512 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 04:03:32.672999    8512 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 04:03:32.672999    8512 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 04:03:32.672999    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:32.759631    8512 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 04:03:32.762406    8512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 04:03:32.765207    8512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 04:03:32.766524    8512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 04:03:32.771531    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:32.910792    8512 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 04:03:32.911037    8512 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:03:32.924616    8512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:03:33.439079    8512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:03:33.928698    8512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:03:34.441853    8512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:03:34.472999    8512 command_runner.go:130] > 5737
	I0603 04:03:34.472999    8512 api_server.go:72] duration metric: took 1.5619597s to wait for apiserver process to appear ...
	I0603 04:03:34.472999    8512 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:03:34.472999    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:37.390532    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 04:03:37.390843    8512 api_server.go:103] status: https://172.17.94.139:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 04:03:37.390843    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:37.428517    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 04:03:37.428517    8512 api_server.go:103] status: https://172.17.94.139:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 04:03:37.473528    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:37.522404    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 04:03:37.522404    8512 api_server.go:103] status: https://172.17.94.139:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 04:03:37.988484    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:37.996441    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 04:03:37.996522    8512 api_server.go:103] status: https://172.17.94.139:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 04:03:38.490125    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:38.503830    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 04:03:38.503930    8512 api_server.go:103] status: https://172.17.94.139:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 04:03:38.977539    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:38.984926    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 200:
	ok
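
api_server.go above polls /healthz roughly every 500ms until it returns 200, treating the early 403 (anonymous request rejected) and 500 (post-start hooks still running) responses as "not ready yet". A minimal Go sketch of such a loop (TLS verification is disabled here purely for brevity; minikube's real client uses the cluster CA and client certs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 OK or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Println("healthz:", resp.StatusCode) // 403/500 while booting
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://172.17.94.139:8441/healthz", time.Minute))
    }
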
	I0603 04:03:38.989414    8512 round_trippers.go:463] GET https://172.17.94.139:8441/version
	I0603 04:03:38.989414    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:38.989414    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:38.989414    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.016384    8512 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0603 04:03:39.017617    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.017617    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.017617    8512 round_trippers.go:580]     Audit-Id: 5f93cc61-f43b-4774-954f-6ef9ae0824da
	I0603 04:03:39.017690    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.017690    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.017690    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.017690    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.017690    8512 round_trippers.go:580]     Content-Length: 263
	I0603 04:03:39.017786    8512 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 04:03:39.017934    8512 api_server.go:141] control plane version: v1.30.1
	I0603 04:03:39.018051    8512 api_server.go:131] duration metric: took 4.5450451s to wait for apiserver health ...
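
The wait above is a plain HTTP poll: keep issuing GET /healthz and treat anything other than 200 (such as the earlier 500s while "[-]poststarthook/rbac/bootstrap-roles failed" was pending) as "not ready yet". Below is a minimal Go sketch of that pattern, not minikube's actual api_server.go; the URL and the overall budget are taken from the log, while the 500ms retry interval and the TLS handling are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the budget runs out.
func waitForHealthz(url string, timeout time.Duration) error {
	// During bootstrap the apiserver serves a cert the host does not trust,
	// so this kubeconfig-less probe skips verification (assumption).
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok": control plane is reachable
			}
			// A 500 while poststart hooks (e.g. rbac/bootstrap-roles) finish
			// is expected during restart; fall through and retry.
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.94.139:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
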
	I0603 04:03:39.018088    8512 cni.go:84] Creating CNI manager for ""
	I0603 04:03:39.018088    8512 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 04:03:39.020548    8512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 04:03:39.035452    8512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 04:03:39.057836    8512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
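
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation only, here is a hypothetical Go snippet that writes a typical bridge + host-local + portmap conflist of the kind the bridge CNI consumes; the exact fields and the 10.244.0.0/16 subnet are illustrative assumptions, not the verbatim file minikube transferred.

package main

import (
	"fmt"
	"os"
)

// A typical bridge CNI network config: the bridge plugin wires each pod
// into a Linux bridge, host-local hands out addresses from the pod CIDR,
// and portmap implements hostPort mappings. Field values are illustrative.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// /etc/cni/net.d is where cri-dockerd looks for network configs;
	// writing there requires root, which is why the log runs sudo mkdir first.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
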
	I0603 04:03:39.110799    8512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:03:39.110867    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:39.110867    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.110867    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.110867    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.122837    8512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 04:03:39.124484    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.124484    8512 round_trippers.go:580]     Audit-Id: 5925f80c-c986-451b-acff-657996decd53
	I0603 04:03:39.124484    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.124484    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.124620    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.124620    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.124620    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.125805    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"511"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"504","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52250 chars]
	I0603 04:03:39.130312    8512 system_pods.go:59] 7 kube-system pods found
	I0603 04:03:39.130436    8512 system_pods.go:61] "coredns-7db6d8ff4d-89hqd" [8f729a75-fdf4-49a2-8fc6-d200958a5cba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 04:03:39.130466    8512 system_pods.go:61] "etcd-functional-754300" [628b2258-fc1d-4338-b400-204e834d977b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 04:03:39.130466    8512 system_pods.go:61] "kube-apiserver-functional-754300" [80857bae-91d0-466a-8332-84b6cacb9ac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 04:03:39.130466    8512 system_pods.go:61] "kube-controller-manager-functional-754300" [5e23ca02-be30-431e-b95e-44185444871b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 04:03:39.130466    8512 system_pods.go:61] "kube-proxy-t5fmv" [331b5954-d9af-44df-9931-bd63f1440eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 04:03:39.130466    8512 system_pods.go:61] "kube-scheduler-functional-754300" [815ac9e3-c107-472d-97ae-401869c0635e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 04:03:39.130466    8512 system_pods.go:61] "storage-provisioner" [b33ccee1-44e1-4a45-b3bd-001b1944c26c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 04:03:39.130466    8512 system_pods.go:74] duration metric: took 19.5992ms to wait for pod list to return data ...
	I0603 04:03:39.130466    8512 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:03:39.131090    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes
	I0603 04:03:39.131090    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.131090    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.131141    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.131801    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:39.136053    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.136120    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.136152    8512 round_trippers.go:580]     Audit-Id: 0c2f6e3f-f129-4ca7-84e6-53cddf996e0b
	I0603 04:03:39.136152    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.136152    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.136152    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.136203    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.136464    8512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"511"},"items":[{"metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0603 04:03:39.136648    8512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:03:39.137195    8512 node_conditions.go:123] node cpu capacity is 2
	I0603 04:03:39.137236    8512 node_conditions.go:105] duration metric: took 6.7699ms to run NodePressure ...
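
The two capacity lines above come straight out of the NodeList response: status.capacity carries quantities as strings ("17734596Ki", "2"). A stdlib-only Go sketch of extracting them follows, assuming anonymous access for brevity; the real round-tripper authenticates with the kubeconfig's client certificate.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeList mirrors just the fields of a v1 NodeList that the two capacity
// log lines are derived from.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			// Quantities arrive as strings, e.g. "17734596Ki" and "2".
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://172.17.94.139:8441/api/v1/nodes")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	var nl nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
		fmt.Println(err)
		return
	}
	for _, n := range nl.Items {
		// These are the fields behind the two node_conditions.go log lines.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Metadata.Name,
			n.Status.Capacity["ephemeral-storage"],
			n.Status.Capacity["cpu"])
	}
}
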
	I0603 04:03:39.137236    8512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 04:03:39.570929    8512 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 04:03:39.570962    8512 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 04:03:39.571082    8512 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 04:03:39.571297    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0603 04:03:39.571297    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.571297    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.571297    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.572248    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:39.572248    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.572248    8512 round_trippers.go:580]     Audit-Id: d9a1ab75-72fb-4766-a87e-cb8e349f753e
	I0603 04:03:39.572248    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.572248    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.572248    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.572248    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.572248    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.580333    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"509","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31290 chars]
	I0603 04:03:39.581859    8512 kubeadm.go:733] kubelet initialised
	I0603 04:03:39.581932    8512 kubeadm.go:734] duration metric: took 10.8497ms waiting for restarted kubelet to initialise ...
	I0603 04:03:39.581985    8512 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:03:39.582040    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:39.582112    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.582112    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.582137    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.582834    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:39.586483    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.586483    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.586483    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.586483    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.586483    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.586483    8512 round_trippers.go:580]     Audit-Id: ca7a3b74-7d36-444d-9582-c48ee0ba363a
	I0603 04:03:39.586483    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.587852    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"504","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52250 chars]
	I0603 04:03:39.590163    8512 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:39.590274    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:39.590274    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.590274    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.590274    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.590933    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:39.590933    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.590933    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.590933    8512 round_trippers.go:580]     Audit-Id: 56a3543e-e216-415f-b164-443fc362a849
	I0603 04:03:39.590933    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.590933    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.592578    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.592578    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.592787    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"504","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0603 04:03:39.592949    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:39.592949    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:39.592949    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:39.592949    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:39.593619    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:39.593619    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:39.593619    8512 round_trippers.go:580]     Audit-Id: 2e452d70-607c-415b-992d-6862dfcd4de4
	I0603 04:03:39.596197    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:39.596197    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:39.596197    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:39.596197    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:39.596197    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:39 GMT
	I0603 04:03:39.596256    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:40.104727    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:40.104797    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:40.104797    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:40.104797    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:40.105684    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:40.109317    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:40.109317    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:40 GMT
	I0603 04:03:40.109317    8512 round_trippers.go:580]     Audit-Id: 9d695c55-88b2-42c7-b4d4-0ab6b672b009
	I0603 04:03:40.109317    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:40.109317    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:40.109317    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:40.109317    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:40.109405    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"504","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0603 04:03:40.110361    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:40.110431    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:40.110431    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:40.110431    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:40.110721    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:40.110721    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:40.110721    8512 round_trippers.go:580]     Audit-Id: 2b53f324-b377-4382-880d-c77a680b26c2
	I0603 04:03:40.110721    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:40.110721    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:40.110721    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:40.113622    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:40.113622    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:40 GMT
	I0603 04:03:40.114784    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:40.596859    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:40.596859    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:40.596859    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:40.596859    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:40.597390    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:40.597390    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:40.601955    8512 round_trippers.go:580]     Audit-Id: 171fa6cc-0223-4235-a56f-a7e49865b73d
	I0603 04:03:40.601955    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:40.601955    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:40.601955    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:40.601955    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:40.601955    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:40 GMT
	I0603 04:03:40.602316    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:40.603556    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:40.603644    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:40.603644    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:40.603644    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:40.603999    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:40.606914    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:40.606963    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:40.606963    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:40 GMT
	I0603 04:03:40.606963    8512 round_trippers.go:580]     Audit-Id: 8187d1f3-b115-4768-8014-6d484eceff2d
	I0603 04:03:40.606963    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:40.606963    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:40.606963    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:40.606963    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:41.096653    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:41.097040    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:41.097040    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:41.097040    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:41.097887    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:41.100875    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:41.100875    8512 round_trippers.go:580]     Audit-Id: 5a8dc0ad-811f-4fcb-843f-224b3ab95c2a
	I0603 04:03:41.100875    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:41.100875    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:41.100875    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:41.100875    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:41.100951    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:41 GMT
	I0603 04:03:41.101065    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:41.101949    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:41.102044    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:41.102044    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:41.102044    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:41.102258    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:41.102258    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:41.105007    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:41.105007    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:41.105007    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:41.105007    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:41 GMT
	I0603 04:03:41.105007    8512 round_trippers.go:580]     Audit-Id: 485f5d86-4d23-4013-8f56-e4da55b00939
	I0603 04:03:41.105007    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:41.105337    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:41.597789    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:41.597855    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:41.597855    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:41.597855    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:41.601976    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:41.601976    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:41.601976    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:41 GMT
	I0603 04:03:41.601976    8512 round_trippers.go:580]     Audit-Id: 44762a00-fd6c-4d90-afcb-51c0ee3fe2b7
	I0603 04:03:41.601976    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:41.602063    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:41.602063    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:41.602063    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:41.602132    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:41.603179    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:41.603249    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:41.603249    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:41.603249    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:41.608232    8512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:03:41.608232    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:41.608232    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:41.608232    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:41.608232    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:41 GMT
	I0603 04:03:41.608232    8512 round_trippers.go:580]     Audit-Id: 7a9e0cb0-0f79-4435-81ae-866df2f2f8ae
	I0603 04:03:41.608232    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:41.608232    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:41.609025    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:41.609025    8512 pod_ready.go:102] pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace has status "Ready":"False"
	I0603 04:03:42.105740    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:42.105740    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:42.105740    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:42.105740    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:42.109661    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:42.109713    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:42.109713    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:42.109749    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:42.109749    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:42 GMT
	I0603 04:03:42.109749    8512 round_trippers.go:580]     Audit-Id: 0210bc0f-de9c-4de3-9cf5-00078205b672
	I0603 04:03:42.109804    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:42.109804    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:42.109828    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:42.110885    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:42.110934    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:42.110934    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:42.111000    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:42.111722    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:42.113891    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:42.113891    8512 round_trippers.go:580]     Audit-Id: 0f31148b-5c23-4990-8d9f-2c4af8e253d3
	I0603 04:03:42.113891    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:42.113927    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:42.113927    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:42.113927    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:42.113948    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:42 GMT
	I0603 04:03:42.113948    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:42.604096    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:42.604347    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:42.604347    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:42.604347    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:42.604602    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:42.604602    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:42.604602    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:42.604602    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:42.604602    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:42.604602    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:42.604602    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:42 GMT
	I0603 04:03:42.604602    8512 round_trippers.go:580]     Audit-Id: d1f8bc0a-d640-4239-a607-449ad1d14a6a
	I0603 04:03:42.609056    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:42.609611    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:42.609611    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:42.609611    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:42.609611    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:42.610339    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:42.610339    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:42.610339    8512 round_trippers.go:580]     Audit-Id: 273eadfe-e2d1-4655-b3df-ecb55ceb5a69
	I0603 04:03:42.610339    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:42.610339    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:42.610339    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:42.610339    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:42.610339    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:42 GMT
	I0603 04:03:42.613236    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:43.091656    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:43.091946    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:43.091946    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:43.091946    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:43.092708    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:43.092708    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:43.096881    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:43.096881    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:43 GMT
	I0603 04:03:43.096881    8512 round_trippers.go:580]     Audit-Id: f43ebc78-1778-4667-9346-78f119bb5885
	I0603 04:03:43.096881    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:43.096881    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:43.096881    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:43.097062    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:43.098714    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:43.098714    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:43.098714    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:43.098714    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:43.099038    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:43.099038    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:43.101324    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:43.101324    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:43.101324    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:43 GMT
	I0603 04:03:43.101324    8512 round_trippers.go:580]     Audit-Id: 199605d9-a48e-4cf6-ac76-068135649ff4
	I0603 04:03:43.101324    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:43.101324    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:43.101663    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:43.604778    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:43.604778    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:43.604778    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:43.604778    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:43.605307    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:43.605307    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:43.605307    8512 round_trippers.go:580]     Audit-Id: 2b4b39df-274e-475a-95ad-4c8a3f514440
	I0603 04:03:43.610284    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:43.610284    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:43.610284    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:43.610284    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:43.610284    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:43 GMT
	I0603 04:03:43.610630    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:43.611078    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:43.611078    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:43.611620    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:43.611620    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:43.612237    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:43.612237    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:43.612237    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:43.612237    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:43.612237    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:43.612237    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:43.612237    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:43 GMT
	I0603 04:03:43.612237    8512 round_trippers.go:580]     Audit-Id: c7f08c56-6312-4229-a29b-8aed8dcc1617
	I0603 04:03:43.615570    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:43.615570    8512 pod_ready.go:102] pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace has status "Ready":"False"
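
The repeating GET pod / GET node pairs above are pod_ready's roughly 500ms poll loop: fetch the pod, look for the Ready condition, retry until the 4m0s budget runs out. A minimal Go sketch of the same loop follows, again stdlib-only and without the authentication the real client performs; the endpoint and pod name are taken from the log.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// pod mirrors only status.conditions, which is what the log's
// "Ready":"False" verdict is read from.
type pod struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// podReady fetches the pod once and reports whether its Ready condition is True.
func podReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var p pod
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd"
	deadline := time.Now().Add(4 * time.Minute) // the log's 4m0s per-pod budget
	for time.Now().Before(deadline) {
		if ok, err := podReady(client, url); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence seen in the log
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
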
	I0603 04:03:44.105371    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:44.105445    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:44.105445    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:44.105445    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:44.105826    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:44.105826    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:44.105826    8512 round_trippers.go:580]     Audit-Id: f12c6c83-eb9f-4d2d-abfb-714f9725304d
	I0603 04:03:44.105826    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:44.105826    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:44.105826    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:44.105826    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:44.105826    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:44 GMT
	I0603 04:03:44.110739    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:44.111527    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:44.111609    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:44.111609    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:44.111609    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:44.111861    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:44.111861    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:44.111861    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:44.114439    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:44.114439    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:44.114439    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:44.114439    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:44 GMT
	I0603 04:03:44.114439    8512 round_trippers.go:580]     Audit-Id: 8d8daebc-7615-4ad3-b61b-9a65a728d107
	I0603 04:03:44.114728    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:44.591577    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:44.591781    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:44.591781    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:44.591781    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:44.592121    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:44.594863    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:44.594915    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:44.594915    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:44.594915    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:44.594915    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:44 GMT
	I0603 04:03:44.594915    8512 round_trippers.go:580]     Audit-Id: c4f64faa-23ec-4177-953e-7078e8a5a058
	I0603 04:03:44.594915    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:44.595202    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:44.595949    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:44.595994    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:44.595994    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:44.595994    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:44.601231    8512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:03:44.602180    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:44.602222    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:44.602254    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:44.602254    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:44.602254    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:44 GMT
	I0603 04:03:44.602321    8512 round_trippers.go:580]     Audit-Id: 12d3ebd3-6c60-43e2-baee-3db17c401ab1
	I0603 04:03:44.602321    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:44.602689    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:45.092848    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:45.092909    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:45.092909    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:45.092909    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:45.093257    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:45.097380    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:45.097380    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:45.097380    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:45.097380    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:45 GMT
	I0603 04:03:45.097380    8512 round_trippers.go:580]     Audit-Id: 00516085-2736-4dd3-9a1c-d85f47d6fb07
	I0603 04:03:45.097380    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:45.097380    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:45.097708    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:45.098661    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:45.098661    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:45.098729    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:45.098729    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:45.099012    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:45.101616    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:45.101616    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:45.101616    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:45 GMT
	I0603 04:03:45.101616    8512 round_trippers.go:580]     Audit-Id: 8208de25-ac18-420b-8f8e-6155e555865c
	I0603 04:03:45.101616    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:45.101616    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:45.101616    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:45.101884    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:45.601949    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:45.602249    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:45.602249    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:45.602323    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:45.602517    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:45.605961    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:45.605961    8512 round_trippers.go:580]     Audit-Id: 5036b30f-be71-4776-a41b-eb580a332c1d
	I0603 04:03:45.605961    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:45.605961    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:45.605961    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:45.605961    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:45.605961    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:45 GMT
	I0603 04:03:45.606181    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"515","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0603 04:03:45.606769    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:45.606769    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:45.606769    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:45.606769    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:45.607300    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:45.607300    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:45.609998    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:45 GMT
	I0603 04:03:45.609998    8512 round_trippers.go:580]     Audit-Id: abafe78f-ba0c-4ec8-a909-fcf358a65b6f
	I0603 04:03:45.609998    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:45.609998    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:45.609998    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:45.609998    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:45.610266    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:46.116966    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:46.117156    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.117156    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.117156    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.119867    8512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:03:46.120937    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.120937    8512 round_trippers.go:580]     Audit-Id: 4afb5e66-86d1-4977-82c6-58a9d73fa92a
	I0603 04:03:46.120937    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.120937    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.120937    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.120937    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.121030    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.121371    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"571","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0603 04:03:46.122484    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:46.122484    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.122596    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.122596    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.124659    8512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:03:46.124659    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.124659    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.124659    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.125552    8512 round_trippers.go:580]     Audit-Id: 6abf13a7-993b-41e1-9e57-1736a781c704
	I0603 04:03:46.125552    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.125552    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.125552    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.125845    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:46.127003    8512 pod_ready.go:92] pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:46.127003    8512 pod_ready.go:81] duration metric: took 6.5368305s for pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace to be "Ready" ...
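The wait that just completed shows the shape of minikube's readiness check: GET the Pod, test its PodReady condition, GET the Node, then retry on a roughly 500ms cadence until the condition reports True or the timeout elapses. As a minimal client-go sketch of that loop (illustrative only, not minikube's actual pod_ready.go; the kubeconfig source, namespace, pod name, and 4-minute timeout are assumptions taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the Pod's PodReady condition is True,
// the predicate behind the "Ready":"True" lines in the log above.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 4m0s and ~500ms mirror the timeout and poll cadence visible in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err = wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "coredns-7db6d8ff4d-89hqd", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return podIsReady(pod), nil
		})
	fmt.Println("pod ready:", err == nil)
}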
	I0603 04:03:46.127082    8512 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:46.127170    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:46.127170    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.127170    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.127256    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.127460    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:46.127460    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.127460    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.130074    8512 round_trippers.go:580]     Audit-Id: 51e12559-df6f-407c-ab79-a5f76127b216
	I0603 04:03:46.130074    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.130074    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.130074    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.130074    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.130536    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"509","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6733 chars]
	I0603 04:03:46.131147    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:46.131147    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.131197    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.131197    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.131357    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:46.131357    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.131357    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.134120    8512 round_trippers.go:580]     Audit-Id: a16a5d75-6c24-4765-875c-4cdb83515891
	I0603 04:03:46.134120    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.134120    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.134120    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.134120    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.134120    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:46.637315    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:46.637315    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.637315    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.637315    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.644157    8512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:03:46.644157    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.644157    8512 round_trippers.go:580]     Audit-Id: 0e0c4c6e-f6d5-456e-829b-77f5297a9527
	I0603 04:03:46.644157    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.644157    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.644157    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.644157    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.644157    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.644157    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"509","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6733 chars]
	I0603 04:03:46.645776    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:46.645776    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:46.645776    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:46.645776    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:46.647018    8512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:03:46.649098    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:46.649098    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:46.649152    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:46.649152    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:46.649200    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:46 GMT
	I0603 04:03:46.649200    8512 round_trippers.go:580]     Audit-Id: 3bd5d0dc-1dad-4bee-a6aa-33444d21ac70
	I0603 04:03:46.649200    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:46.649200    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:47.141170    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:47.141559    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:47.141559    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:47.141652    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:47.141950    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:47.145551    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:47.145551    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:47.145551    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:47 GMT
	I0603 04:03:47.145551    8512 round_trippers.go:580]     Audit-Id: ec0a2ed4-4694-4b2a-893d-dba6f800a57c
	I0603 04:03:47.145551    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:47.145551    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:47.145551    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:47.145551    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"509","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6733 chars]
	I0603 04:03:47.146542    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:47.146599    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:47.146599    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:47.146599    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:47.146915    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:47.146915    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:47.146915    8512 round_trippers.go:580]     Audit-Id: bcd3dda7-1021-4e90-ad4c-430a5095ea70
	I0603 04:03:47.146915    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:47.146915    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:47.146915    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:47.146915    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:47.146915    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:47 GMT
	I0603 04:03:47.149882    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:47.634083    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:47.634326    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:47.634326    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:47.634326    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:47.640162    8512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:03:47.640226    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:47.640226    8512 round_trippers.go:580]     Audit-Id: 1cdff938-dd3d-4f2f-a377-1a356aaae0dc
	I0603 04:03:47.640226    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:47.640363    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:47.640505    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:47.640505    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:47.640505    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:47 GMT
	I0603 04:03:47.640764    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"509","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6733 chars]
	I0603 04:03:47.641096    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:47.641096    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:47.641096    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:47.641096    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:47.645928    8512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:03:47.645928    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:47.645928    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:47.646002    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:47.646002    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:47.646002    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:47 GMT
	I0603 04:03:47.646002    8512 round_trippers.go:580]     Audit-Id: 877053db-ed0c-47b7-b5eb-c437ebc57d9a
	I0603 04:03:47.646002    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:47.646002    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:48.140339    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:48.140339    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.140411    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.140411    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.140728    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:48.140728    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.144229    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.144229    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.144229    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.144229    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.144229    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.144229    8512 round_trippers.go:580]     Audit-Id: 88c46c58-6145-4a6b-9348-ee99cfb0723f
	I0603 04:03:48.144451    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"576","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6509 chars]
	I0603 04:03:48.145148    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:48.145204    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.145204    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.145204    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.150959    8512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:03:48.157944    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.157944    8512 round_trippers.go:580]     Audit-Id: b9690ddd-29b7-4ca7-a152-4905426f7fe4
	I0603 04:03:48.157944    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.157944    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.157944    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.157944    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.157944    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.158323    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:48.158872    8512 pod_ready.go:92] pod "etcd-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:48.158872    8512 pod_ready.go:81] duration metric: took 2.0317579s for pod "etcd-functional-754300" in "kube-system" namespace to be "Ready" ...
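The round_trippers.go lines that dominate these waits are client-go's HTTP tracing, emitted when the client runs with verbose logging: the verb and URL, the request headers, the response status with latency, then the response headers. A rough stdlib-only sketch of such a logging transport follows; it is illustrative only, not client-go's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingRT wraps another RoundTripper and prints a trace of each
// exchange, in the spirit of the round_trippers.go output above.
type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n",
		resp.Status, time.Since(start).Milliseconds())
	fmt.Println("Response Headers:")
	for k, v := range resp.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	return resp, nil
}

func main() {
	// Hypothetical target URL; any HTTP client can carry the wrapper.
	client := &http.Client{Transport: loggingRT{http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}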
	I0603 04:03:48.158872    8512 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:48.159011    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:48.159076    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.159076    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.159076    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.159193    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:48.162261    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.162261    8512 round_trippers.go:580]     Audit-Id: 746866d6-939c-4425-b6ec-edbdeeff46f6
	I0603 04:03:48.162261    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.162261    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.162261    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.162261    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.162374    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.162619    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-754300","namespace":"kube-system","uid":"80857bae-91d0-466a-8332-84b6cacb9ac9","resourceVersion":"506","creationTimestamp":"2024-06-03T11:00:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.94.139:8441","kubernetes.io/config.hash":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.mirror":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.seen":"2024-06-03T11:00:53.345708024Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7953 chars]
	I0603 04:03:48.163523    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:48.163645    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.163645    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.163645    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.166198    8512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:03:48.166198    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.166198    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.166198    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.166198    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.166198    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.166198    8512 round_trippers.go:580]     Audit-Id: 7f4c04a6-6b1a-4f18-8b1d-cd503ec0f83c
	I0603 04:03:48.166198    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.167029    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:48.674518    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:48.674518    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.674518    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.674518    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.675107    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:48.680296    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.680296    8512 round_trippers.go:580]     Audit-Id: 838a764e-fc84-4531-8bfb-8610bc4197e4
	I0603 04:03:48.680296    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.680296    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.680296    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.680296    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.680296    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.680296    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-754300","namespace":"kube-system","uid":"80857bae-91d0-466a-8332-84b6cacb9ac9","resourceVersion":"506","creationTimestamp":"2024-06-03T11:00:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.94.139:8441","kubernetes.io/config.hash":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.mirror":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.seen":"2024-06-03T11:00:53.345708024Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7953 chars]
	I0603 04:03:48.681059    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:48.681584    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:48.681584    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:48.681584    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:48.682442    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:48.682442    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:48.682442    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:48.682442    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:48.682442    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:48.684474    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:48.684474    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:48 GMT
	I0603 04:03:48.684474    8512 round_trippers.go:580]     Audit-Id: ce4d31fd-5e21-4b68-aff5-ebe4550acd61
	I0603 04:03:48.684574    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.172666    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:49.172666    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.172747    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.172747    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.173119    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.178274    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.178274    8512 round_trippers.go:580]     Audit-Id: c102c088-4629-4d8c-afd5-76f0f479418f
	I0603 04:03:49.178274    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.178274    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.178274    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.178274    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.178274    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.178274    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-754300","namespace":"kube-system","uid":"80857bae-91d0-466a-8332-84b6cacb9ac9","resourceVersion":"506","creationTimestamp":"2024-06-03T11:00:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.94.139:8441","kubernetes.io/config.hash":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.mirror":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.seen":"2024-06-03T11:00:53.345708024Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7953 chars]
	I0603 04:03:49.179416    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:49.179416    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.179416    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.179416    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.179656    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.182195    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.182195    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.182281    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.182371    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.182419    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.182419    8512 round_trippers.go:580]     Audit-Id: 0255a96b-6519-4c80-9a1e-c30e3ad6e485
	I0603 04:03:49.182419    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.182419    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.686539    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:49.686756    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.686756    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.686756    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.687650    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.690486    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.690486    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.690486    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.690486    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.690486    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.690486    8512 round_trippers.go:580]     Audit-Id: b3c60b1d-4969-41ac-bcb1-b164dffe729e
	I0603 04:03:49.690486    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.690486    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-754300","namespace":"kube-system","uid":"80857bae-91d0-466a-8332-84b6cacb9ac9","resourceVersion":"581","creationTimestamp":"2024-06-03T11:00:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.94.139:8441","kubernetes.io/config.hash":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.mirror":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.seen":"2024-06-03T11:00:53.345708024Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7709 chars]
	I0603 04:03:49.691813    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:49.691902    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.691902    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.691902    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.705442    8512 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 04:03:49.705442    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.707512    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.707512    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.707512    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.707512    8512 round_trippers.go:580]     Audit-Id: 0ee969ef-e869-4987-a979-18e80b3111a3
	I0603 04:03:49.707512    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.707512    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.707712    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.708264    8512 pod_ready.go:92] pod "kube-apiserver-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:49.708264    8512 pod_ready.go:81] duration metric: took 1.54939s for pod "kube-apiserver-functional-754300" in "kube-system" namespace to be "Ready" ...
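Each pod_ready.go wait above is, at its core, a poll on the pod's PodReady condition until it reports True or the budget runs out. A minimal client-go sketch of that loop (an illustration under stated assumptions, not minikube's pod_ready.go; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, give up after 4 minutes, the same budget the
	// log shows for the control-plane pods.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-functional-754300", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return podIsReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}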
	I0603 04:03:49.708264    8512 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.708373    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-754300
	I0603 04:03:49.708450    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.708450    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.708450    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.708695    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.708695    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.708695    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.708695    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.708695    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.708695    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.708695    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.708695    8512 round_trippers.go:580]     Audit-Id: 03eb4bc7-2c87-404a-9e23-90307d756869
	I0603 04:03:49.711990    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-754300","namespace":"kube-system","uid":"5e23ca02-be30-431e-b95e-44185444871b","resourceVersion":"572","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"718eddacf210d83186f38f84b80ed7d5","kubernetes.io/config.mirror":"718eddacf210d83186f38f84b80ed7d5","kubernetes.io/config.seen":"2024-06-03T11:01:00.756958764Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7612 chars]
	I0603 04:03:49.712177    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:49.712177    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.712177    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.712177    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.714673    8512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:03:49.714673    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.714673    8512 round_trippers.go:580]     Audit-Id: a8cc5107-51f6-4549-9296-955d838e1126
	I0603 04:03:49.714673    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.714673    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.715484    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.715484    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.715484    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.715612    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.715612    8512 pod_ready.go:92] pod "kube-controller-manager-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:49.715612    8512 pod_ready.go:81] duration metric: took 7.3481ms for pod "kube-controller-manager-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.715612    8512 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t5fmv" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.715612    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-proxy-t5fmv
	I0603 04:03:49.715612    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.715612    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.715612    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.716331    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.716331    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.716331    8512 round_trippers.go:580]     Audit-Id: 434e34a1-2850-463c-876f-044b8bb04491
	I0603 04:03:49.716331    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.716331    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.716331    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.716331    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.716331    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.718891    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t5fmv","generateName":"kube-proxy-","namespace":"kube-system","uid":"331b5954-d9af-44df-9931-bd63f1440eaf","resourceVersion":"516","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c0b72f2c-0a47-4786-885b-19cccd1f89b3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b72f2c-0a47-4786-885b-19cccd1f89b3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6287 chars]
	I0603 04:03:49.719769    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:49.719815    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.719878    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.719878    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.720662    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.720662    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.720662    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.720662    8512 round_trippers.go:580]     Audit-Id: 63e7683e-8313-4123-bfab-548e83e4571c
	I0603 04:03:49.720662    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.720662    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.722131    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.722131    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.722207    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.722207    8512 pod_ready.go:92] pod "kube-proxy-t5fmv" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:49.722207    8512 pod_ready.go:81] duration metric: took 6.5951ms for pod "kube-proxy-t5fmv" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.722207    8512 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.722870    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-754300
	I0603 04:03:49.722870    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.722870    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.722870    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.724783    8512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:03:49.724783    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.724783    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.725524    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.725524    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.725524    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.725524    8512 round_trippers.go:580]     Audit-Id: f0ccb7ad-c7d1-4ead-987d-06157a158307
	I0603 04:03:49.725524    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.725746    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-754300","namespace":"kube-system","uid":"815ac9e3-c107-472d-97ae-401869c0635e","resourceVersion":"579","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9e7f5fee8cc66f571f04a43b316af61d","kubernetes.io/config.mirror":"9e7f5fee8cc66f571f04a43b316af61d","kubernetes.io/config.seen":"2024-06-03T11:01:00.756950564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0603 04:03:49.726227    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:49.726296    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:49.726296    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:49.726296    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:49.726461    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:49.726461    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:49.726461    8512 round_trippers.go:580]     Audit-Id: ff08a7be-732c-4c47-a9fc-d6c50b6a3655
	I0603 04:03:49.726461    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:49.726461    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:49.728640    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:49.728640    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:49.728640    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:49 GMT
	I0603 04:03:49.728922    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:49.729043    8512 pod_ready.go:92] pod "kube-scheduler-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:49.729043    8512 pod_ready.go:81] duration metric: took 6.8356ms for pod "kube-scheduler-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:49.729043    8512 pod_ready.go:38] duration metric: took 10.1470425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
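The label list in that summary maps directly onto per-selector List calls against kube-system; each match then goes through the readiness poll sketched earlier. A companion sketch (same client-go assumptions as above):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitTargets lists kube-system pods for each of the selectors the
// summary line names; a real waiter would then apply the PodReady
// poll from the earlier sketch to every pod found.
func waitTargets(ctx context.Context, cs kubernetes.Interface) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("%s matches %q\n", p.Name, sel)
		}
	}
	return nil
}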
	I0603 04:03:49.729043    8512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 04:03:49.750301    8512 command_runner.go:130] > -16
	I0603 04:03:49.750469    8512 ops.go:34] apiserver oom_adj: -16
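The ops.go check above reads /proc/<pid>/oom_adj for the apiserver inside the VM; -16 on the legacy oom_adj scale (-17..15) keeps the OOM killer well away from the process. The same probe as a standalone Go sketch (run where kube-apiserver actually lives; in the real flow it goes over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Shell out exactly as the log does, then parse the single integer.
	out, err := exec.Command("/bin/bash", "-c", `cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
	if err != nil {
		panic(err)
	}
	adj, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", adj) // -16 in this run
}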
	I0603 04:03:49.750469    8512 kubeadm.go:591] duration metric: took 19.8491614s to restartPrimaryControlPlane
	I0603 04:03:49.750469    8512 kubeadm.go:393] duration metric: took 19.9437901s to StartCluster
	I0603 04:03:49.750469    8512 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:03:49.750660    8512 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:03:49.752270    8512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
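The {Name:... Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>} structs printed by settings.go and lock.go are mutex Specs; the field set matches github.com/juju/mutex, which minikube's lock helpers wrap to serialize writes to shared files such as kubeconfig. A minimal acquire/release sketch (import paths assumed; the lock name is hypothetical):

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "mkexample",            // hypothetical lock name for this sketch
		Clock:   clock.WallClock,        // real wall clock
		Delay:   500 * time.Millisecond, // retry interval, as in the log
		Timeout: time.Minute,            // give up after 1m0s, as in the log
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("lock held; safe to rewrite kubeconfig")
}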
	I0603 04:03:49.754032    8512 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.94.139 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:03:49.754032    8512 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 04:03:49.754586    8512 addons.go:69] Setting storage-provisioner=true in profile "functional-754300"
	I0603 04:03:49.757549    8512 out.go:177] * Verifying Kubernetes components...
	I0603 04:03:49.754690    8512 addons.go:234] Setting addon storage-provisioner=true in "functional-754300"
	W0603 04:03:49.757549    8512 addons.go:243] addon storage-provisioner should already be in state true
	I0603 04:03:49.754808    8512 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:03:49.754808    8512 addons.go:69] Setting default-storageclass=true in profile "functional-754300"
	I0603 04:03:49.762075    8512 host.go:66] Checking if "functional-754300" exists ...
	I0603 04:03:49.762320    8512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-754300"
	I0603 04:03:49.762903    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:49.763701    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
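Every "[executing ==>]" line is the hyperv driver shelling out to PowerShell; there is no persistent Hyper-V API session, so each VM query is a fresh process. A standalone Go equivalent of the Get-VM state call above (assumes Windows with the Hyper-V module available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`( Hyper-V\Get-VM functional-754300 ).state`,
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Println("VM state:", strings.TrimSpace(string(out))) // e.g. "Running"
}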
	I0603 04:03:49.776065    8512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:03:50.054387    8512 ssh_runner.go:195] Run: sudo systemctl start kubelet
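Each "ssh_runner.go:195] Run:" line executes inside the VM over an SSH session. A stripped-down runner for the two systemctl commands above (host address taken from this log; the user and key path are placeholders, and the real runner also records the duration metrics seen here):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func run(client *ssh.Client, cmd string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("Run: %s\n%s", cmd, out)
	return err
}

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.17.94.139:22", &ssh.ClientConfig{
		User:            "docker", // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
		if err := run(client, cmd); err != nil {
			panic(err)
		}
	}
}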
	I0603 04:03:50.088332    8512 node_ready.go:35] waiting up to 6m0s for node "functional-754300" to be "Ready" ...
	I0603 04:03:50.088453    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.088453    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.088453    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.088453    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.092112    8512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:03:50.092190    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.092190    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.092190    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.092190    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.092190    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.092190    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.092190    8512 round_trippers.go:580]     Audit-Id: fe6edf81-61b7-4b60-8d44-36071189b258
	I0603 04:03:50.092799    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:50.093644    8512 node_ready.go:49] node "functional-754300" has status "Ready":"True"
	I0603 04:03:50.093644    8512 node_ready.go:38] duration metric: took 5.2244ms for node "functional-754300" to be "Ready" ...
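node_ready.go applies the same condition-polling pattern at node scope before moving on to pods. A small sketch of the check itself (same client-go assumptions as the pod-readiness example above):

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady mirrors the node_ready.go check: a node counts as
// "Ready" when its NodeReady condition reports True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}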
	I0603 04:03:50.093644    8512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:03:50.093644    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:50.093644    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.093644    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.093644    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.094928    8512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:03:50.098508    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.098508    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.098508    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.098508    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.098508    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.098508    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.098508    8512 round_trippers.go:580]     Audit-Id: d626ab17-cb53-4221-b824-8c3b2ccda289
	I0603 04:03:50.099971    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"581"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"571","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50823 chars]
	I0603 04:03:50.106659    8512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:50.106659    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-89hqd
	I0603 04:03:50.106659    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.106659    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.106659    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.111447    8512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:03:50.111546    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.111546    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.111546    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.111546    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.111546    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.111546    8512 round_trippers.go:580]     Audit-Id: 91024843-41ff-40e7-bd8a-73450182f36e
	I0603 04:03:50.111546    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.111546    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"571","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0603 04:03:50.147717    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.147717    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.147921    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.147921    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.152645    8512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:03:50.152737    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.152737    8512 round_trippers.go:580]     Audit-Id: 14b90ede-bd0f-4745-9a4b-050426e30f93
	I0603 04:03:50.152807    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.152807    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.152807    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.152807    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.152807    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.154830    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:50.156491    8512 pod_ready.go:92] pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:50.156573    8512 pod_ready.go:81] duration metric: took 49.9131ms for pod "coredns-7db6d8ff4d-89hqd" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:50.156655    8512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:50.355069    8512 request.go:629] Waited for 197.9705ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:50.355447    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/etcd-functional-754300
	I0603 04:03:50.355447    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.355447    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.355447    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.360267    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:50.360267    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.360267    8512 round_trippers.go:580]     Audit-Id: 6058e6f7-d806-49da-a329-05ae4c4bfeef
	I0603 04:03:50.360267    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.360267    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.360267    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.360267    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.360267    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.360585    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-754300","namespace":"kube-system","uid":"628b2258-fc1d-4338-b400-204e834d977b","resourceVersion":"576","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.94.139:2379","kubernetes.io/config.hash":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.mirror":"9a5cd856bb44bfe5e66fcfd245ef8c9a","kubernetes.io/config.seen":"2024-06-03T11:01:00.756956264Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6509 chars]
	I0603 04:03:50.556291    8512 request.go:629] Waited for 194.7074ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.556393    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.556393    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.556476    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.556476    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.560122    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:50.560122    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.560122    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.560122    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.560196    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.560196    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.560196    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.560196    8512 round_trippers.go:580]     Audit-Id: d6219c85-f8c6-421b-967b-078e02f481d2
	I0603 04:03:50.560196    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:50.562545    8512 pod_ready.go:92] pod "etcd-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:50.562598    8512 pod_ready.go:81] duration metric: took 405.9424ms for pod "etcd-functional-754300" in "kube-system" namespace to be "Ready" ...
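The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket limiter, distinct from the server-side APF whose flow-schema UIDs appear in the response headers. rest.Config defaults to QPS 5 / Burst 10, and at 5 QPS the steady-state spacing is 200ms, which matches the ~180-205ms waits recorded here. A sketch of that limiter in isolation:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

// Same limiter type the rest client uses: the burst of 10 passes
// immediately, then each further request waits ~200ms (1/QPS at QPS=5).
func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // QPS, burst
	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}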
	I0603 04:03:50.562598    8512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:50.742993    8512 request.go:629] Waited for 180.1833ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:50.743205    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-754300
	I0603 04:03:50.743205    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.743205    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.743205    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.743940    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:50.746150    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.746150    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.746150    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.746150    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.746238    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.746273    8512 round_trippers.go:580]     Audit-Id: 413bba89-ddb2-4923-b9de-4630c1e0feaa
	I0603 04:03:50.746290    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.746565    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-754300","namespace":"kube-system","uid":"80857bae-91d0-466a-8332-84b6cacb9ac9","resourceVersion":"581","creationTimestamp":"2024-06-03T11:00:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.94.139:8441","kubernetes.io/config.hash":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.mirror":"16c4acfafc53a8a35478d12ae9a61076","kubernetes.io/config.seen":"2024-06-03T11:00:53.345708024Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7709 chars]
	I0603 04:03:50.953004    8512 request.go:629] Waited for 205.3897ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.953004    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:50.953004    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:50.953004    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:50.953004    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:50.957722    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:50.957722    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:50.957722    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:50.957722    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:50.957722    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:50 GMT
	I0603 04:03:50.957722    8512 round_trippers.go:580]     Audit-Id: 14bde8cd-259f-4f35-a9d5-4ef1b6d12532
	I0603 04:03:50.957722    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:50.957722    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:50.957722    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:50.958559    8512 pod_ready.go:92] pod "kube-apiserver-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:50.958559    8512 pod_ready.go:81] duration metric: took 395.9603ms for pod "kube-apiserver-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:50.958559    8512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:51.143730    8512 request.go:629] Waited for 184.9563ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-754300
	I0603 04:03:51.143866    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-754300
	I0603 04:03:51.143866    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:51.143935    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:51.143935    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:51.144541    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:51.147584    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:51.147584    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:51.147584    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:51.147584    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:51.147584    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:51.147678    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:51 GMT
	I0603 04:03:51.147678    8512 round_trippers.go:580]     Audit-Id: 67c9761f-24b8-4886-9bc2-d6fcb1ba5c31
	I0603 04:03:51.147801    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-754300","namespace":"kube-system","uid":"5e23ca02-be30-431e-b95e-44185444871b","resourceVersion":"572","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"718eddacf210d83186f38f84b80ed7d5","kubernetes.io/config.mirror":"718eddacf210d83186f38f84b80ed7d5","kubernetes.io/config.seen":"2024-06-03T11:01:00.756958764Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7612 chars]
	I0603 04:03:51.341313    8512 request.go:629] Waited for 192.6939ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:51.341763    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:51.341763    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:51.341763    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:51.341763    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:51.342497    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:51.353292    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:51.353292    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:51.353292    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:51.353292    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:51 GMT
	I0603 04:03:51.353292    8512 round_trippers.go:580]     Audit-Id: 52e5b848-9288-4296-af16-1ccd42904aa7
	I0603 04:03:51.353292    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:51.353292    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:51.353938    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:51.354052    8512 pod_ready.go:92] pod "kube-controller-manager-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:51.354052    8512 pod_ready.go:81] duration metric: took 395.4932ms for pod "kube-controller-manager-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:51.354052    8512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t5fmv" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:51.553829    8512 request.go:629] Waited for 199.4744ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-proxy-t5fmv
	I0603 04:03:51.553893    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-proxy-t5fmv
	I0603 04:03:51.553893    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:51.553893    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:51.553893    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:51.554622    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:51.554622    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:51.554622    8512 round_trippers.go:580]     Audit-Id: 79f5a17a-d577-4454-95b1-dbda7881f9b5
	I0603 04:03:51.554622    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:51.554622    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:51.554622    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:51.554622    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:51.554622    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:51 GMT
	I0603 04:03:51.554622    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t5fmv","generateName":"kube-proxy-","namespace":"kube-system","uid":"331b5954-d9af-44df-9931-bd63f1440eaf","resourceVersion":"516","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c0b72f2c-0a47-4786-885b-19cccd1f89b3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b72f2c-0a47-4786-885b-19cccd1f89b3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6287 chars]
	I0603 04:03:51.743122    8512 request.go:629] Waited for 188.339ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:51.743200    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:51.743200    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:51.743287    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:51.743347    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:51.744078    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:51.747252    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:51.747252    8512 round_trippers.go:580]     Audit-Id: 0c3ded2f-c0e5-4626-ba75-eac30ac89f45
	I0603 04:03:51.747252    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:51.747252    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:51.747252    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:51.747252    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:51.747252    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:51 GMT
	I0603 04:03:51.747252    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:51.748030    8512 pod_ready.go:92] pod "kube-proxy-t5fmv" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:51.748030    8512 pod_ready.go:81] duration metric: took 393.9774ms for pod "kube-proxy-t5fmv" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:51.748030    8512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:51.944388    8512 request.go:629] Waited for 196.3578ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-754300
	I0603 04:03:51.944558    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-754300
	I0603 04:03:51.944558    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:51.944558    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:51.944558    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:51.944558    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:51.944558    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:51.945274    8512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:03:51.945895    8512 kapi.go:59] client config for functional-754300: &rest.Config{Host:"https://172.17.94.139:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-754300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-754300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 04:03:51.946615    8512 addons.go:234] Setting addon default-storageclass=true in "functional-754300"
	W0603 04:03:51.946615    8512 addons.go:243] addon default-storageclass should already be in state true
	I0603 04:03:51.946615    8512 host.go:66] Checking if "functional-754300" exists ...
	I0603 04:03:51.947142    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:51.947260    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:51.951431    8512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 04:03:51.948110    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:51.950041    8512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:03:51.953436    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:51.953436    8512 round_trippers.go:580]     Audit-Id: 397ba2e0-3665-4bcb-aef9-53336954a278
	I0603 04:03:51.953436    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:51.953436    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:51.953436    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:51.953436    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:51.953436    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:51 GMT
	I0603 04:03:51.953744    8512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:03:51.953744    8512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 04:03:51.953744    8512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-754300","namespace":"kube-system","uid":"815ac9e3-c107-472d-97ae-401869c0635e","resourceVersion":"579","creationTimestamp":"2024-06-03T11:01:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9e7f5fee8cc66f571f04a43b316af61d","kubernetes.io/config.mirror":"9e7f5fee8cc66f571f04a43b316af61d","kubernetes.io/config.seen":"2024-06-03T11:01:00.756950564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0603 04:03:51.953744    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:52.153571    8512 request.go:629] Waited for 198.9261ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:52.153757    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes/functional-754300
	I0603 04:03:52.153873    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.153873    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.153873    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.157555    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:52.157555    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.157555    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.157555    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.157555    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.157555    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.157555    8512 round_trippers.go:580]     Audit-Id: c43ab3b8-3791-452c-812d-2b5d38ff3826
	I0603 04:03:52.157555    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.157555    8512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T11:00:57Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0603 04:03:52.158454    8512 pod_ready.go:92] pod "kube-scheduler-functional-754300" in "kube-system" namespace has status "Ready":"True"
	I0603 04:03:52.158541    8512 pod_ready.go:81] duration metric: took 410.5097ms for pod "kube-scheduler-functional-754300" in "kube-system" namespace to be "Ready" ...
	I0603 04:03:52.158541    8512 pod_ready.go:38] duration metric: took 2.064894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
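
[editor's note] The pod_ready loop that just completed is the standard client-go pattern: GET the pod, inspect its PodReady condition, and poll until a deadline, with request.go's client-side throttling spacing the calls roughly 200ms apart. A minimal sketch of that pattern, assuming the kubeconfig path the log shows; the podReady helper is illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil // treat transient errors as "not ready yet"
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Mirrors the log's "waiting up to 6m0s for pod ... to be Ready".
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podReady(ctx, cs, "kube-system", "kube-proxy-t5fmv")
		})
	fmt.Println("ready:", err == nil)
}
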
	I0603 04:03:52.158669    8512 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:03:52.170023    8512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:03:52.198219    8512 command_runner.go:130] > 5737
	I0603 04:03:52.198328    8512 api_server.go:72] duration metric: took 2.444292s to wait for apiserver process to appear ...
	I0603 04:03:52.198380    8512 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:03:52.198414    8512 api_server.go:253] Checking apiserver healthz at https://172.17.94.139:8441/healthz ...
	I0603 04:03:52.208231    8512 api_server.go:279] https://172.17.94.139:8441/healthz returned 200:
	ok
	I0603 04:03:52.208231    8512 round_trippers.go:463] GET https://172.17.94.139:8441/version
	I0603 04:03:52.208231    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.208231    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.208231    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.209241    8512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:03:52.209241    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.209241    8512 round_trippers.go:580]     Audit-Id: 4895f256-5834-4e8e-b688-ca84c985eb80
	I0603 04:03:52.209241    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.209241    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.209241    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.209241    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.210345    8512 round_trippers.go:580]     Content-Length: 263
	I0603 04:03:52.210345    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.210345    8512 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 04:03:52.210468    8512 api_server.go:141] control plane version: v1.30.1
	I0603 04:03:52.210468    8512 api_server.go:131] duration metric: took 12.0538ms to wait for apiserver health ...
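
[editor's note] The healthz/version probe logged above is two plain HTTPS GETs against the apiserver. A minimal net/http sketch, assuming the endpoint from the log is reachable; the real client authenticates with the profile's client.crt/client.key, and InsecureSkipVerify here is purely for illustration.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}

	resp, err := c.Get("https://172.17.94.139:8441/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = c.Get("https://172.17.94.139:8441/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane:", v.GitVersion, v.Platform) // v1.30.1 linux/amd64
}
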
	I0603 04:03:52.210547    8512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:03:52.348922    8512 request.go:629] Waited for 138.3746ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:52.349157    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:52.349430    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.349430    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.349430    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.349886    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:52.357518    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.357576    8512 round_trippers.go:580]     Audit-Id: bad1fd63-39f2-4831-833e-f3689d65164b
	I0603 04:03:52.357576    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.357631    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.357631    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.357681    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.357681    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.359786    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"571","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50823 chars]
	I0603 04:03:52.363708    8512 system_pods.go:59] 7 kube-system pods found
	I0603 04:03:52.364252    8512 system_pods.go:61] "coredns-7db6d8ff4d-89hqd" [8f729a75-fdf4-49a2-8fc6-d200958a5cba] Running
	I0603 04:03:52.364252    8512 system_pods.go:61] "etcd-functional-754300" [628b2258-fc1d-4338-b400-204e834d977b] Running
	I0603 04:03:52.364252    8512 system_pods.go:61] "kube-apiserver-functional-754300" [80857bae-91d0-466a-8332-84b6cacb9ac9] Running
	I0603 04:03:52.364252    8512 system_pods.go:61] "kube-controller-manager-functional-754300" [5e23ca02-be30-431e-b95e-44185444871b] Running
	I0603 04:03:52.364336    8512 system_pods.go:61] "kube-proxy-t5fmv" [331b5954-d9af-44df-9931-bd63f1440eaf] Running
	I0603 04:03:52.364336    8512 system_pods.go:61] "kube-scheduler-functional-754300" [815ac9e3-c107-472d-97ae-401869c0635e] Running
	I0603 04:03:52.364392    8512 system_pods.go:61] "storage-provisioner" [b33ccee1-44e1-4a45-b3bd-001b1944c26c] Running
	I0603 04:03:52.364392    8512 system_pods.go:74] duration metric: took 153.8445ms to wait for pod list to return data ...
	I0603 04:03:52.364447    8512 default_sa.go:34] waiting for default service account to be created ...
	I0603 04:03:52.545210    8512 request.go:629] Waited for 180.5073ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/default/serviceaccounts
	I0603 04:03:52.545312    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/default/serviceaccounts
	I0603 04:03:52.545503    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.545503    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.545503    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.546304    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:52.549526    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.549526    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.549641    8512 round_trippers.go:580]     Content-Length: 261
	I0603 04:03:52.549641    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.549641    8512 round_trippers.go:580]     Audit-Id: fb60593f-a431-43ec-9119-d433465a471d
	I0603 04:03:52.549641    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.549720    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.549720    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.549720    8512 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d0f9a0ae-96fc-40e9-910c-bb9885ea1ec3","resourceVersion":"312","creationTimestamp":"2024-06-03T11:01:14Z"}}]}
	I0603 04:03:52.549720    8512 default_sa.go:45] found service account: "default"
	I0603 04:03:52.549720    8512 default_sa.go:55] duration metric: took 185.2732ms for default service account to be created ...
	I0603 04:03:52.549720    8512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 04:03:52.750896    8512 request.go:629] Waited for 201.0216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:52.751085    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/namespaces/kube-system/pods
	I0603 04:03:52.751085    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.751085    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.751085    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.756615    8512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:03:52.756615    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.756698    8512 round_trippers.go:580]     Audit-Id: be865027-4710-4606-a22e-616e6f788f5f
	I0603 04:03:52.756698    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.756698    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.756698    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.756698    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.756698    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.758012    8512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-89hqd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8f729a75-fdf4-49a2-8fc6-d200958a5cba","resourceVersion":"571","creationTimestamp":"2024-06-03T11:01:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5bb54014-bd04-46d1-8bec-281a57f7357b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T11:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bb54014-bd04-46d1-8bec-281a57f7357b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50823 chars]
	I0603 04:03:52.760737    8512 system_pods.go:86] 7 kube-system pods found
	I0603 04:03:52.760907    8512 system_pods.go:89] "coredns-7db6d8ff4d-89hqd" [8f729a75-fdf4-49a2-8fc6-d200958a5cba] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "etcd-functional-754300" [628b2258-fc1d-4338-b400-204e834d977b] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "kube-apiserver-functional-754300" [80857bae-91d0-466a-8332-84b6cacb9ac9] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "kube-controller-manager-functional-754300" [5e23ca02-be30-431e-b95e-44185444871b] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "kube-proxy-t5fmv" [331b5954-d9af-44df-9931-bd63f1440eaf] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "kube-scheduler-functional-754300" [815ac9e3-c107-472d-97ae-401869c0635e] Running
	I0603 04:03:52.760907    8512 system_pods.go:89] "storage-provisioner" [b33ccee1-44e1-4a45-b3bd-001b1944c26c] Running
	I0603 04:03:52.761011    8512 system_pods.go:126] duration metric: took 211.2908ms to wait for k8s-apps to be running ...
	I0603 04:03:52.761053    8512 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 04:03:52.772575    8512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:03:52.794417    8512 system_svc.go:56] duration metric: took 33.3645ms WaitForService to wait for kubelet
	I0603 04:03:52.794417    8512 kubeadm.go:576] duration metric: took 3.0403802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:03:52.794417    8512 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:03:52.947473    8512 request.go:629] Waited for 152.7339ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.94.139:8441/api/v1/nodes
	I0603 04:03:52.947581    8512 round_trippers.go:463] GET https://172.17.94.139:8441/api/v1/nodes
	I0603 04:03:52.947581    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:52.947581    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:52.947581    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:52.953821    8512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:03:52.953821    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:52.953821    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:52 GMT
	I0603 04:03:52.953821    8512 round_trippers.go:580]     Audit-Id: 17dbf350-8fc3-4385-94db-8e8ff542d4d7
	I0603 04:03:52.953821    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:52.953821    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:52.953821    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:52.953821    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:52.954439    8512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"functional-754300","uid":"d5ecc5fe-256c-440e-b747-27f281a1b922","resourceVersion":"503","creationTimestamp":"2024-06-03T11:00:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-754300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"functional-754300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T04_01_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0603 04:03:52.954439    8512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:03:52.954439    8512 node_conditions.go:123] node cpu capacity is 2
	I0603 04:03:52.955026    8512 node_conditions.go:105] duration metric: took 160.6086ms to run NodePressure ...
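
[editor's note] The node_conditions.go check above boils down to listing nodes and reading Status.Capacity. A hypothetical helper sketching that, assuming a clientset built as in the earlier sketch; printNodeCapacity is an illustrative name.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printNodeCapacity lists nodes and prints the two capacity figures the log
// reports (ephemeral-storage 17734596Ki, cpu 2).
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := printNodeCapacity(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}
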
	I0603 04:03:52.955082    8512 start.go:240] waiting for startup goroutines ...
	I0603 04:03:54.155043    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:54.155043    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:54.167188    8512 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 04:03:54.167372    8512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 04:03:54.167372    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:54.167372    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
	I0603 04:03:54.167460    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:54.167493    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:56.342476    8512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:03:56.342476    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:56.353708    8512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
	I0603 04:03:56.737184    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:56.750454    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:56.750670    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
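
[editor's note] The "[executing ==>]" lines show how the Hyper-V driver discovers the VM's state and the IP it then hands to the SSH client at sshutil.go:53: it shells out to PowerShell and scrapes stdout. A rough Go sketch of that pattern; hypervQuery is a hypothetical helper, not libmachine's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervQuery runs a PowerShell expression the way the log's
// "[executing ==>]" lines show and returns its trimmed stdout.
func hypervQuery(expr string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hypervQuery(`( Hyper-V\Get-VM functional-754300 ).state`)
	if err != nil {
		panic(err)
	}
	ip, err := hypervQuery(`(( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state, "ip:", ip) // e.g. "state: Running ip: 172.17.94.139"
}
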
	I0603 04:03:56.890121    8512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:03:57.701916    8512 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0603 04:03:57.702126    8512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0603 04:03:57.702126    8512 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0603 04:03:57.702191    8512 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0603 04:03:57.702230    8512 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0603 04:03:57.702230    8512 command_runner.go:130] > pod/storage-provisioner configured
	I0603 04:03:58.865132    8512 main.go:141] libmachine: [stdout =====>] : 172.17.94.139
	
	I0603 04:03:58.865132    8512 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:03:58.865407    8512 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
	I0603 04:03:59.013506    8512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 04:03:59.163522    8512 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0603 04:03:59.165296    8512 round_trippers.go:463] GET https://172.17.94.139:8441/apis/storage.k8s.io/v1/storageclasses
	I0603 04:03:59.165395    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:59.165395    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:59.165500    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:59.168904    8512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:03:59.168904    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:59.168904    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:59.168904    8512 round_trippers.go:580]     Content-Length: 1273
	I0603 04:03:59.168904    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:59 GMT
	I0603 04:03:59.168904    8512 round_trippers.go:580]     Audit-Id: e18fe319-4301-4bf9-9042-3ec8bdb3dfb7
	I0603 04:03:59.168904    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:59.169629    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:59.169629    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:59.169689    8512 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"592"},"items":[{"metadata":{"name":"standard","uid":"a89ffbbd-5489-4597-9910-2e0f0713ed7d","resourceVersion":"404","creationTimestamp":"2024-06-03T11:01:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T11:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0603 04:03:59.170009    8512 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a89ffbbd-5489-4597-9910-2e0f0713ed7d","resourceVersion":"404","creationTimestamp":"2024-06-03T11:01:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T11:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 04:03:59.170612    8512 round_trippers.go:463] PUT https://172.17.94.139:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 04:03:59.170612    8512 round_trippers.go:469] Request Headers:
	I0603 04:03:59.170612    8512 round_trippers.go:473]     Content-Type: application/json
	I0603 04:03:59.170705    8512 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:03:59.170705    8512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:03:59.171581    8512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 04:03:59.171581    8512 round_trippers.go:577] Response Headers:
	I0603 04:03:59.171581    8512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 04:03:59.176118    8512 round_trippers.go:580]     Content-Type: application/json
	I0603 04:03:59.176118    8512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: caa9004d-4363-45cc-88b5-0019640cb640
	I0603 04:03:59.176118    8512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3dd0deb5-e3d0-4889-bf33-157524de1928
	I0603 04:03:59.176118    8512 round_trippers.go:580]     Content-Length: 1220
	I0603 04:03:59.176118    8512 round_trippers.go:580]     Date: Mon, 03 Jun 2024 11:03:59 GMT
	I0603 04:03:59.176118    8512 round_trippers.go:580]     Audit-Id: 4c4357b9-994a-4c37-8460-a29d2498e1ee
	I0603 04:03:59.176188    8512 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a89ffbbd-5489-4597-9910-2e0f0713ed7d","resourceVersion":"404","creationTimestamp":"2024-06-03T11:01:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T11:01:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 04:03:59.180510    8512 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 04:03:59.183064    8512 addons.go:510] duration metric: took 9.4290171s for enable addons: enabled=[storage-provisioner default-storageclass]
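
[editor's note] The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard above re-asserts the default-class annotation on the "standard" StorageClass. A minimal client-go sketch of the same round-trip, under the same kubeconfig assumption as the earlier sketches; markDefault is an illustrative name.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// markDefault fetches the "standard" StorageClass and re-PUTs it with the
// annotation that marks it as the cluster default, matching the log's PUT.
func markDefault(ctx context.Context, cs kubernetes.Interface) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := markDefault(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}
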
	I0603 04:03:59.183137    8512 start.go:245] waiting for cluster config update ...
	I0603 04:03:59.183314    8512 start.go:254] writing updated cluster config ...
	I0603 04:03:59.196080    8512 ssh_runner.go:195] Run: rm -f paused
	I0603 04:03:59.347256    8512 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 04:03:59.350861    8512 out.go:177] * Done! kubectl is now configured to use "functional-754300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.419282449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.419409948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.491132017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.491207016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.491218216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.491305315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.507386596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.507452896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.507468796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.507549195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 cri-dockerd[4478]: time="2024-06-03T11:03:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ae3c04732e450285bfacec67a6529b0f9a38824ccfb51f0d6b1bcac70c52970/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 11:03:38 functional-754300 cri-dockerd[4478]: time="2024-06-03T11:03:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f4d7d5d2bf17a2254a75d7af6f1d11ed29373aa7e4f57b11662250057aad19aa/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.800050430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.802341613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.802444412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.803865502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.878174851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.878567949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.878668548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:38 functional-754300 cri-dockerd[4478]: time="2024-06-03T11:03:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9bca56561d1dc7aaa5f6dfcd73dbc5b09871a42c48d89712eb9dc3d5a357a294/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 11:03:38 functional-754300 dockerd[4257]: time="2024-06-03T11:03:38.884494805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:39 functional-754300 dockerd[4257]: time="2024-06-03T11:03:39.316806829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:03:39 functional-754300 dockerd[4257]: time="2024-06-03T11:03:39.316874929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:03:39 functional-754300 dockerd[4257]: time="2024-06-03T11:03:39.316887429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:03:39 functional-754300 dockerd[4257]: time="2024-06-03T11:03:39.317016929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8c19255b3057       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   9bca56561d1dc       coredns-7db6d8ff4d-89hqd
	789de4a5df4c4       747097150317f       2 minutes ago       Running             kube-proxy                2                   f4d7d5d2bf17a       kube-proxy-t5fmv
	d8e4b3d4e7cbe       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   9ae3c04732e45       storage-provisioner
	97e715cadb33a       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            2                   8231b439cc3ea       kube-scheduler-functional-754300
	7d32fa350c1e4       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   83247abbdbe3c       etcd-functional-754300
	bff16c297dc8e       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   2                   181bd4db76a6a       kube-controller-manager-functional-754300
	9233eeb120011       91be940803172       2 minutes ago       Running             kube-apiserver            2                   bb029a4230289       kube-apiserver-functional-754300
	acfdea631f68b       91be940803172       2 minutes ago       Created             kube-apiserver            1                   01457c35697cf       kube-apiserver-functional-754300
	d761cba5c079e       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   bf568585498f9       storage-provisioner
	799f9027199dd       3861cfcd7c04c       2 minutes ago       Created             etcd                      1                   5a3784043c01c       etcd-functional-754300
	af1cd54ffbac7       25a1387cdab82       2 minutes ago       Created             kube-controller-manager   1                   f3c25fe55ebd4       kube-controller-manager-functional-754300
	ad2f486769148       a52dc94f0a912       2 minutes ago       Created             kube-scheduler            1                   56a74ddb22b0c       kube-scheduler-functional-754300
	78af648444afc       747097150317f       2 minutes ago       Created             kube-proxy                1                   a9d418f63f2a7       kube-proxy-t5fmv
	2b29438c873ea       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   ac9f1dba44eee       coredns-7db6d8ff4d-89hqd
	
	
	==> coredns [2b29438c873e] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[270898174]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:01:16.640) (total time: 30001ms):
	Trace[270898174]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:01:46.641)
	Trace[270898174]: [30.001581409s] [30.001581409s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[897828615]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:01:16.641) (total time: 30001ms):
	Trace[897828615]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:01:46.642)
	Trace[897828615]: [30.001221715s] [30.001221715s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[152848149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 11:01:16.641) (total time: 30001ms):
	Trace[152848149]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:01:46.641)
	Trace[152848149]: [30.001683113s] [30.001683113s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
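
[editor's note] All three [ERROR] blocks above are one failure mode: the pre-restart coredns pod could not reach the kubernetes service VIP at 10.96.0.1:443, so each reflector list hung for its full 30s timeout before the pod was terminated. A quick connectivity probe (illustrative only) reproduces the check with a shorter deadline.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the kubernetes service VIP with a short timeout instead of the
	// reflector's 30s wait; on the broken network above this prints the
	// same "i/o timeout" the CoreDNS log records.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}
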
	
	
	==> coredns [f8c19255b305] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42597 - 8765 "HINFO IN 7158883241792776017.1719258405260497684. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044965561s
	
	
	==> describe nodes <==
	Name:               functional-754300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-754300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=functional-754300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T04_01_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:00:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-754300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:05:39 +0000   Mon, 03 Jun 2024 11:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:05:39 +0000   Mon, 03 Jun 2024 11:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:05:39 +0000   Mon, 03 Jun 2024 11:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:05:39 +0000   Mon, 03 Jun 2024 11:01:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.94.139
	  Hostname:    functional-754300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 4262d41d7e7d4fa09a655cb0de83dd86
	  System UUID:                bcb9db8c-9222-3649-955b-f154c8c9793e
	  Boot ID:                    9b3255c2-8739-4c12-ae4e-b678a7ca8dac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-89hqd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
	  kube-system                 etcd-functional-754300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m39s
	  kube-system                 kube-apiserver-functional-754300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-functional-754300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-t5fmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-functional-754300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m24s                kube-proxy       
	  Normal  Starting                 2m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s                kubelet          Node functional-754300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                kubelet          Node functional-754300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                kubelet          Node functional-754300 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m40s                kubelet          Starting kubelet.
	  Normal  NodeReady                4m38s                kubelet          Node functional-754300 status is now: NodeReady
	  Normal  RegisteredNode           4m27s                node-controller  Node functional-754300 event: Registered Node functional-754300 in Controller
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m8s)  kubelet          Node functional-754300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m8s)  kubelet          Node functional-754300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m8s)  kubelet          Node functional-754300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s                 node-controller  Node functional-754300 event: Registered Node functional-754300 in Controller
	
	
	==> dmesg <==
	[  +0.640987] systemd-fstab-generator[1524]: Ignoring "noauto" option for root device
	[  +5.934285] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.109811] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.553380] systemd-fstab-generator[2117]: Ignoring "noauto" option for root device
	[  +0.131757] kauditd_printk_skb: 62 callbacks suppressed
	[Jun 3 11:01] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.208278] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.397119] kauditd_printk_skb: 90 callbacks suppressed
	[ +32.802234] kauditd_printk_skb: 10 callbacks suppressed
	[Jun 3 11:03] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.627431] systemd-fstab-generator[3808]: Ignoring "noauto" option for root device
	[  +0.244920] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.273341] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[  +5.278937] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.059014] systemd-fstab-generator[4426]: Ignoring "noauto" option for root device
	[  +0.194231] systemd-fstab-generator[4438]: Ignoring "noauto" option for root device
	[  +0.193108] systemd-fstab-generator[4451]: Ignoring "noauto" option for root device
	[  +0.282704] systemd-fstab-generator[4465]: Ignoring "noauto" option for root device
	[  +0.851158] systemd-fstab-generator[4625]: Ignoring "noauto" option for root device
	[  +1.009706] kauditd_printk_skb: 142 callbacks suppressed
	[  +3.065273] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +1.941726] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.059616] kauditd_printk_skb: 37 callbacks suppressed
	[ +10.327395] systemd-fstab-generator[6374]: Ignoring "noauto" option for root device
	[Jun 3 11:04] hrtimer: interrupt took 767202 ns
	
	
	==> etcd [799f9027199d] <==
	
	
	==> etcd [7d32fa350c1e] <==
	{"level":"info","ts":"2024-06-03T11:03:34.791066Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T11:03:34.791905Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-03T11:03:34.799268Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T11:03:34.801744Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dfee222669f40164","initial-advertise-peer-urls":["https://172.17.94.139:2380"],"listen-peer-urls":["https://172.17.94.139:2380"],"advertise-client-urls":["https://172.17.94.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.94.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T11:03:34.802174Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T11:03:34.79948Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.94.139:2380"}
	{"level":"info","ts":"2024-06-03T11:03:34.802825Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.94.139:2380"}
	{"level":"info","ts":"2024-06-03T11:03:34.800926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 switched to configuration voters=(16135872063296766308)"}
	{"level":"info","ts":"2024-06-03T11:03:34.803236Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ccb933f2b3ff4100","local-member-id":"dfee222669f40164","added-peer-id":"dfee222669f40164","added-peer-peer-urls":["https://172.17.94.139:2380"]}
	{"level":"info","ts":"2024-06-03T11:03:34.803881Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ccb933f2b3ff4100","local-member-id":"dfee222669f40164","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:03:34.808357Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T11:03:35.950226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T11:03:35.950334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T11:03:35.950533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 received MsgPreVoteResp from dfee222669f40164 at term 2"}
	{"level":"info","ts":"2024-06-03T11:03:35.950661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T11:03:35.950772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 received MsgVoteResp from dfee222669f40164 at term 3"}
	{"level":"info","ts":"2024-06-03T11:03:35.951026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfee222669f40164 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T11:03:35.9511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfee222669f40164 elected leader dfee222669f40164 at term 3"}
	{"level":"info","ts":"2024-06-03T11:03:35.96371Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"dfee222669f40164","local-member-attributes":"{Name:functional-754300 ClientURLs:[https://172.17.94.139:2379]}","request-path":"/0/members/dfee222669f40164/attributes","cluster-id":"ccb933f2b3ff4100","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T11:03:35.963714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:03:35.964071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T11:03:35.964372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T11:03:35.963736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T11:03:35.96635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T11:03:35.967697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.94.139:2379"}
	
	
	==> kernel <==
	 11:05:41 up 6 min,  0 users,  load average: 0.39, 0.54, 0.29
	Linux functional-754300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9233eeb12001] <==
	I0603 11:03:37.474738       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 11:03:37.477658       1 aggregator.go:165] initial CRD sync complete...
	I0603 11:03:37.477861       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 11:03:37.478135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 11:03:37.509088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 11:03:37.512836       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 11:03:37.513161       1 policy_source.go:224] refreshing policies
	I0603 11:03:37.530766       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 11:03:37.552368       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 11:03:37.552914       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 11:03:37.560449       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 11:03:37.570692       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 11:03:37.572316       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 11:03:37.584613       1 cache.go:39] Caches are synced for autoregister controller
	I0603 11:03:37.585388       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 11:03:37.585814       1 shared_informer.go:320] Caches are synced for configmaps
	E0603 11:03:37.625819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 11:03:38.370830       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 11:03:39.408064       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 11:03:39.446254       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 11:03:39.508431       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 11:03:39.553274       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 11:03:39.564560       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 11:03:50.704500       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 11:03:50.805369       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [acfdea631f68] <==
	
	
	==> kube-controller-manager [af1cd54ffbac] <==
	
	
	==> kube-controller-manager [bff16c297dc8] <==
	I0603 11:03:50.590018       1 shared_informer.go:320] Caches are synced for namespace
	I0603 11:03:50.592685       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 11:03:50.598586       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 11:03:50.600151       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 11:03:50.601511       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 11:03:50.603760       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 11:03:50.603835       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 11:03:50.603845       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 11:03:50.604085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 11:03:50.605812       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 11:03:50.610736       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 11:03:50.621578       1 shared_informer.go:320] Caches are synced for HPA
	I0603 11:03:50.624218       1 shared_informer.go:320] Caches are synced for job
	I0603 11:03:50.629730       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 11:03:50.672680       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 11:03:50.708383       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 11:03:50.769883       1 shared_informer.go:320] Caches are synced for deployment
	I0603 11:03:50.769955       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 11:03:50.799693       1 shared_informer.go:320] Caches are synced for disruption
	I0603 11:03:50.802108       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 11:03:50.815275       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 11:03:50.851846       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 11:03:51.249383       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:03:51.265852       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 11:03:51.266427       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [789de4a5df4c] <==
	I0603 11:03:39.230012       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:03:39.261577       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.94.139"]
	I0603 11:03:39.333227       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:03:39.333321       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:03:39.333342       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:03:39.338656       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:03:39.338805       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:03:39.338817       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:03:39.346781       1 config.go:192] "Starting service config controller"
	I0603 11:03:39.346794       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:03:39.346813       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:03:39.346817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:03:39.347334       1 config.go:319] "Starting node config controller"
	I0603 11:03:39.347341       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:03:39.448900       1 shared_informer.go:320] Caches are synced for node config
	I0603 11:03:39.448951       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:03:39.449042       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [78af648444af] <==
	
	
	==> kube-scheduler [97e715cadb33] <==
	I0603 11:03:35.242895       1 serving.go:380] Generated self-signed cert in-memory
	W0603 11:03:37.428663       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 11:03:37.428837       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:03:37.428950       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 11:03:37.429145       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 11:03:37.528492       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 11:03:37.531014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:03:37.535992       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 11:03:37.538028       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:03:37.538993       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 11:03:37.539253       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 11:03:37.639067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ad2f48676914] <==
	
	
	==> kubelet <==
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.631204    5363 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.632160    5363 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.798362    5363 apiserver.go:52] "Watching apiserver"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.801946    5363 topology_manager.go:215] "Topology Admit Handler" podUID="8f729a75-fdf4-49a2-8fc6-d200958a5cba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-89hqd"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.802251    5363 topology_manager.go:215] "Topology Admit Handler" podUID="331b5954-d9af-44df-9931-bd63f1440eaf" podNamespace="kube-system" podName="kube-proxy-t5fmv"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.802544    5363 topology_manager.go:215] "Topology Admit Handler" podUID="b33ccee1-44e1-4a45-b3bd-001b1944c26c" podNamespace="kube-system" podName="storage-provisioner"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.841279    5363 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: E0603 11:03:37.849469    5363 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-754300\" already exists" pod="kube-system/kube-apiserver-functional-754300"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: E0603 11:03:37.849576    5363 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-functional-754300\" already exists" pod="kube-system/kube-controller-manager-functional-754300"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.861834    5363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331b5954-d9af-44df-9931-bd63f1440eaf-lib-modules\") pod \"kube-proxy-t5fmv\" (UID: \"331b5954-d9af-44df-9931-bd63f1440eaf\") " pod="kube-system/kube-proxy-t5fmv"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.862060    5363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b33ccee1-44e1-4a45-b3bd-001b1944c26c-tmp\") pod \"storage-provisioner\" (UID: \"b33ccee1-44e1-4a45-b3bd-001b1944c26c\") " pod="kube-system/storage-provisioner"
	Jun 03 11:03:37 functional-754300 kubelet[5363]: I0603 11:03:37.862156    5363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331b5954-d9af-44df-9931-bd63f1440eaf-xtables-lock\") pod \"kube-proxy-t5fmv\" (UID: \"331b5954-d9af-44df-9931-bd63f1440eaf\") " pod="kube-system/kube-proxy-t5fmv"
	Jun 03 11:03:39 functional-754300 kubelet[5363]: I0603 11:03:39.021603    5363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bca56561d1dc7aaa5f6dfcd73dbc5b09871a42c48d89712eb9dc3d5a357a294"
	Jun 03 11:03:41 functional-754300 kubelet[5363]: I0603 11:03:41.251710    5363 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 03 11:03:45 functional-754300 kubelet[5363]: I0603 11:03:45.680689    5363 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 03 11:04:32 functional-754300 kubelet[5363]: E0603 11:04:32.951365    5363 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:04:32 functional-754300 kubelet[5363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:04:32 functional-754300 kubelet[5363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:04:32 functional-754300 kubelet[5363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:04:32 functional-754300 kubelet[5363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:05:32 functional-754300 kubelet[5363]: E0603 11:05:32.951013    5363 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:05:32 functional-754300 kubelet[5363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:05:32 functional-754300 kubelet[5363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:05:32 functional-754300 kubelet[5363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:05:32 functional-754300 kubelet[5363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [d761cba5c079] <==
	
	
	==> storage-provisioner [d8e4b3d4e7cb] <==
	I0603 11:03:38.985163       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 11:03:39.039292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 11:03:39.039339       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 11:03:56.481570       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 11:03:56.482146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-754300_1a6504ed-f628-4a11-97f0-3af654943ad2!
	I0603 11:03:56.482916       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fae8fd3f-ebbd-40bb-b4c8-d6017a1f0267", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-754300_1a6504ed-f628-4a11-97f0-3af654943ad2 became leader
	I0603 11:03:56.582614       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-754300_1a6504ed-f628-4a11-97f0-3af654943ad2!
	

-- /stdout --
** stderr ** 
	W0603 04:05:33.465145    8772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-754300 -n functional-754300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-754300 -n functional-754300: (11.6914937s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-754300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (32.46s)
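
The kubelet section of the post-mortem log above repeatedly fails to create its KUBE-KUBELET-CANARY chain: the guest kernel exposes no IPv6 nat table (ip6table_nat is not loaded), which looks incidental to the kubectl failure under test. Below is a minimal Go sketch of that style of canary probe, assuming a Linux machine with iptables/ip6tables on PATH; the real kubelet goes through its own iptables helper package rather than raw exec.

package main

// Minimal sketch of an iptables "canary" probe in the spirit of the kubelet
// errors above: create a throwaway chain in the nat table and treat a
// non-zero exit as "table unusable". Assumes iptables/ip6tables on PATH.
import (
	"fmt"
	"os/exec"
)

func natTableUsable(binary string) bool {
	// -w waits for the xtables lock; -t nat -N creates the canary chain.
	// On the VM above, ip6tables exits with status 3 because the kernel
	// lacks ip6table_nat ("Table does not exist (do you need to insmod?)").
	return exec.Command(binary, "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").Run() == nil
}

func main() {
	for _, bin := range []string{"iptables", "ip6tables"} {
		fmt.Printf("%s nat table usable: %v\n", bin, natTableUsable(bin))
	}
}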

TestFunctional/parallel/ConfigCmd (1.19s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config unset cpus" to be -""- but got *"W0603 04:08:39.128499    3316 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 config get cpus: exit status 14 (201.2319ms)

** stderr ** 
	W0603 04:08:39.353274    4628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0603 04:08:39.353274    4628 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0603 04:08:39.549842    9896 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config get cpus" to be -""- but got *"W0603 04:08:39.767255    8620 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config unset cpus" to be -""- but got *"W0603 04:08:39.962013    3928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 config get cpus: exit status 14 (154.2935ms)

** stderr ** 
	W0603 04:08:40.141423    6776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-754300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0603 04:08:40.141423    6776 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.19s)
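
Every assertion in this block fails the same way: the command's functional output is correct, but each invocation prepends the Docker CLI context warning to stderr, so the exact-match comparisons at functional_test.go:1206 see extra text. The directory named in the warning is the hex SHA-256 of the context name, which is why "default" maps to 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f. A minimal Go sketch of that mapping (contextMetaPath is a hypothetical helper for illustration, not Docker CLI or minikube code):

package main

// Sketch of how the Docker CLI locates a context's metadata: the directory
// name under contexts\meta is the hex SHA-256 digest of the context name.
import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func contextMetaPath(dockerDir, contextName string) string {
	sum := sha256.Sum256([]byte(contextName))
	return filepath.Join(dockerDir, "contexts", "meta",
		fmt.Sprintf("%x", sum), "meta.json")
}

func main() {
	// Prints the path from the warning above; the meta.json simply does not
	// exist on the Jenkins worker, which makes the warning benign but noisy.
	fmt.Println(contextMetaPath(`C:\Users\jenkins.minikube1\.docker`, "default"))
}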

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 service --namespace=default --https --url hello-node: exit status 1 (15.0210188s)

** stderr ** 
	W0603 04:09:28.406708   10848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-754300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url --format={{.IP}}: exit status 1 (15.0244075s)

** stderr ** 
	W0603 04:09:43.475683    4896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url: exit status 1 (15.0127739s)

** stderr ** 
	W0603 04:09:58.481451    8592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-754300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)

TestMultiControlPlane/serial/PingHostFromPods (68.68s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.419669s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0603 04:29:42.916199   14776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-fc5497c4f-bz4xm): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4427983s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0603 04:29:53.826148    5476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-fc5497c4f-hd7gx): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4275117s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0603 04:30:04.684104   15308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-fc5497c4f-np7rl): exit status 1
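
All three pods fail identically: host.minikube.internal resolves, but a single ICMP echo to the host-side gateway 172.17.80.1 gets no reply. On Hyper-V workers this pattern often points at the Windows host firewall dropping ICMP echo requests from the guest subnet rather than at a cluster-side fault. For reference, a Go sketch of the probe the test effectively runs, assuming kubectl on PATH (podCanPingHost is a hypothetical helper, not part of ha_test.go):

package main

// Sketch of the host-reachability probe exercised above: run one ICMP ping
// from inside a pod and treat a non-zero exit as "host unreachable".
import (
	"fmt"
	"os/exec"
)

func podCanPingHost(kubeContext, pod, hostIP string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ping from %s to %s failed: %v\n%s", pod, hostIP, err, out)
	}
	return nil
}

func main() {
	if err := podCanPingHost("ha-528700", "busybox-fc5497c4f-bz4xm", "172.17.80.1"); err != nil {
		fmt.Println(err) // reproduces the 100%-packet-loss failure captured above
	}
}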
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-528700 -n ha-528700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-528700 -n ha-528700: (12.489582s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 logs -n 25: (8.9731398s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-754300 ssh pgrep          | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:11 PDT |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-754300 image build -t     | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:12 PDT | 03 Jun 24 04:12 PDT |
	|         | localhost/my-image:functional-754300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-754300 image ls           | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:12 PDT | 03 Jun 24 04:12 PDT |
	| delete  | -p functional-754300                 | functional-754300 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:16 PDT | 03 Jun 24 04:17 PDT |
	| start   | -p ha-528700 --wait=true             | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:17 PDT | 03 Jun 24 04:28 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- apply -f             | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- rollout status       | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- get pods -o          | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- get pods -o          | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-bz4xm --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-hd7gx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-np7rl --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-bz4xm --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-hd7gx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-np7rl --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-bz4xm -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-hd7gx -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-np7rl -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- get pods -o          | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-bz4xm              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT |                     |
	|         | busybox-fc5497c4f-bz4xm -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT | 03 Jun 24 04:29 PDT |
	|         | busybox-fc5497c4f-hd7gx              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:29 PDT |                     |
	|         | busybox-fc5497c4f-hd7gx -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:30 PDT | 03 Jun 24 04:30 PDT |
	|         | busybox-fc5497c4f-np7rl              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-528700 -- exec                 | ha-528700         | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:30 PDT |                     |
	|         | busybox-fc5497c4f-np7rl -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 04:17:34
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 04:17:34.279474    1052 out.go:291] Setting OutFile to fd 1144 ...
	I0603 04:17:34.280499    1052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:17:34.280499    1052 out.go:304] Setting ErrFile to fd 784...
	I0603 04:17:34.280499    1052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:17:34.308277    1052 out.go:298] Setting JSON to false
	I0603 04:17:34.311960    1052 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2682,"bootTime":1717410772,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 04:17:34.311960    1052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 04:17:34.318093    1052 out.go:177] * [ha-528700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 04:17:34.324284    1052 notify.go:220] Checking for updates...
	I0603 04:17:34.326128    1052 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:17:34.332141    1052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 04:17:34.335271    1052 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 04:17:34.337703    1052 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 04:17:34.343188    1052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 04:17:34.346027    1052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 04:17:39.641140    1052 out.go:177] * Using the hyperv driver based on user configuration
	I0603 04:17:39.645057    1052 start.go:297] selected driver: hyperv
	I0603 04:17:39.645057    1052 start.go:901] validating driver "hyperv" against <nil>
	I0603 04:17:39.645057    1052 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 04:17:39.692201    1052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 04:17:39.693219    1052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:17:39.693219    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:17:39.693219    1052 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 04:17:39.693219    1052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 04:17:39.693752    1052 start.go:340] cluster config:
	{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:17:39.693752    1052 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 04:17:39.701225    1052 out.go:177] * Starting "ha-528700" primary control-plane node in "ha-528700" cluster
	I0603 04:17:39.703176    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:17:39.703176    1052 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 04:17:39.703176    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:17:39.704168    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:17:39.704475    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:17:39.704761    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:17:39.705302    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json: {Name:mk56a0c30d28b92a4751ddb457875919745f5dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:17:39.705535    1052 start.go:360] acquireMachinesLock for ha-528700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:17:39.705535    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700"
	I0603 04:17:39.706850    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:17:39.706850    1052 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 04:17:39.711269    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:17:39.711540    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:17:39.711638    1052 client.go:168] LocalClient.Create starting
	I0603 04:17:39.712344    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:17:39.712586    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:17:39.712633    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:17:41.754872    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:17:41.755561    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:41.755561    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:17:43.486306    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:17:43.486475    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:43.486852    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:17:44.911108    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:17:44.911475    1052 main.go:141] libmachine: [stderr =====>] : 
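For context: the two PowerShell probes above are plain role checks. S-1-5-32-578 is the well-known SID of the local "Hyper-V Administrators" group, and the second probe tests the built-in Administrator role; judging by this run, where the first returns False, the second returns True, and provisioning continues, holding either role appears sufficient. A minimal standalone sketch of the same checks, runnable from any shell on the host:

    # Membership in Hyper-V Administrators (well-known SID S-1-5-32-578)
    powershell.exe -NoProfile -NonInteractive '([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))'
    # Built-in Administrator role
    powershell.exe -NoProfile -NonInteractive '([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")'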
	I0603 04:17:44.911544    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:17:48.455979    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:17:48.456068    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:48.458437    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:17:48.956813    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:17:49.117768    1052 main.go:141] libmachine: Creating VM...
	I0603 04:17:49.117768    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:17:51.875543    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:17:51.875543    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:51.875543    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:17:51.876721    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:17:53.567279    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:17:53.567279    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:53.567279    1052 main.go:141] libmachine: Creating VHD
	I0603 04:17:53.568295    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:17:57.362817    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DA3D68A4-FBFF-4E35-82A3-2AFCCFA39303
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:17:57.362967    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:57.362967    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:17:57.363078    1052 main.go:141] libmachine: Writing SSH key tar header
	I0603 04:17:57.371881    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd' -SizeBytes 20000MB
	I0603 04:18:03.073429    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:03.073629    1052 main.go:141] libmachine: [stderr =====>] : 
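The VHD sequence above condenses to a three-step trick: create a tiny 10MB fixed VHD, write a raw tar stream into it from the host (the "magic tar header" and SSH key lines above), then convert it to a dynamic VHD and grow it to the requested 20000MB; presumably the guest's boot scripts detect the embedded tar data, format the disk on first boot, and keep the SSH key. The same steps by hand, with the long machine paths shortened for illustration:

    powershell.exe -NoProfile -NonInteractive "Hyper-V\New-VHD -Path 'fixed.vhd' -SizeBytes 10MB -Fixed"
    # (host side: the driver writes the tar header and SSH key into fixed.vhd at this point)
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Convert-VHD -Path 'fixed.vhd' -DestinationPath 'disk.vhd' -VHDType Dynamic -DeleteSource"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Resize-VHD -Path 'disk.vhd' -SizeBytes 20000MB"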
	I0603 04:18:03.073711    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:18:06.704518    1052 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-528700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:18:06.704518    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:06.705311    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700 -DynamicMemoryEnabled $false
	I0603 04:18:08.938623    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:08.938623    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:08.938850    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700 -Count 2
	I0603 04:18:11.092245    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:11.092245    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:11.092531    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\boot2docker.iso'
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd'
	I0603 04:18:16.334630    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:16.334630    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:16.334630    1052 main.go:141] libmachine: Starting VM...
	I0603 04:18:16.335439    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700
	I0603 04:18:19.454197    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:19.454341    1052 main.go:141] libmachine: [stderr =====>] : 
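Taken together, the VM assembly above is six cmdlets: create the VM on the chosen switch, pin its memory (dynamic memory is disabled so the guest keeps the full 2200MB), set the CPU count, attach the boot2docker ISO as the DVD drive, attach the prepared VHD, and start it. Consolidated, with <machine-dir> standing in for the profile path:

    powershell.exe -NoProfile -NonInteractive "Hyper-V\New-VM ha-528700 -Path '<machine-dir>' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Set-VMMemory -VMName ha-528700 -DynamicMemoryEnabled \$false"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Set-VMProcessor ha-528700 -Count 2"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Set-VMDvdDrive -VMName ha-528700 -Path '<machine-dir>\boot2docker.iso'"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Add-VMHardDiskDrive -VMName ha-528700 -Path '<machine-dir>\disk.vhd'"
    powershell.exe -NoProfile -NonInteractive "Hyper-V\Start-VM ha-528700"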
	I0603 04:18:19.454401    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:18:19.454401    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:21.710823    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:21.710823    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:21.711756    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:24.221825    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:24.222240    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:25.223961    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:30.083436    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:30.083436    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:31.085402    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:35.770370    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:35.770532    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:36.772864    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:38.995492    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:38.996600    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:38.996600    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:41.588229    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:41.588229    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:42.596227    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:44.816798    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:44.817463    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:44.817463    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:47.383986    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:47.383986    1052 main.go:141] libmachine: [stderr =====>] : 
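The repeated state/ipaddresses probes above are a retry loop: Hyper-V reports an empty address until the guest's DHCP lease lands, so the driver re-asks about once a second until a non-empty string (172.17.88.175 here) comes back. A hand-rolled equivalent:

    # Poll until the first adapter reports an address (sketch; the 1s interval is assumed)
    while :; do
      ip=$(powershell.exe -NoProfile -NonInteractive "(( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]")
      [ -n "$ip" ] && break
      sleep 1
    done
    echo "guest is up at $ip"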
	I0603 04:18:47.384246    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:49.468121    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:49.468290    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:49.468290    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:18:49.468290    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:51.615213    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:51.615213    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:51.615495    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:54.172364    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:54.172632    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:54.178089    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:18:54.187911    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:18:54.187911    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:18:54.311846    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:18:54.311846    1052 buildroot.go:166] provisioning hostname "ha-528700"
	I0603 04:18:54.312533    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:59.007476    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:59.008220    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:59.013463    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:18:59.014176    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:18:59.014176    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700 && echo "ha-528700" | sudo tee /etc/hostname
	I0603 04:18:59.172941    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700
	
	I0603 04:18:59.172941    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:01.276589    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:01.276589    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:01.276771    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:03.785750    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:03.785951    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:03.794590    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:03.794590    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:03.794590    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700' | sudo tee -a /etc/hosts; 
				fi
			fi
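That fragment keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-528700, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. The same logic with comments:

    if ! grep -xq '.*\sha-528700' /etc/hosts; then        # hostname not present yet
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then        # a 127.0.1.1 line exists: rewrite it
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700/g' /etc/hosts
      else                                                # otherwise append a fresh entry
        echo '127.0.1.1 ha-528700' | sudo tee -a /etc/hosts
      fi
    fi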
	I0603 04:19:03.933453    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 04:19:03.933628    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:19:03.933649    1052 buildroot.go:174] setting up certificates
	I0603 04:19:03.933709    1052 provision.go:84] configureAuth start
	I0603 04:19:03.933746    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:06.043155    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:06.043881    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:06.043952    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:08.597494    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:08.598009    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:08.598009    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:10.706029    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:10.706029    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:10.707016    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:13.223196    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:13.223958    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:13.223958    1052 provision.go:143] copyHostCerts
	I0603 04:19:13.224247    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:19:13.224247    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:19:13.224247    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:19:13.225106    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:19:13.226474    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:19:13.226758    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:19:13.226830    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:19:13.227242    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:19:13.228524    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:19:13.228524    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:19:13.228524    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:19:13.229290    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:19:13.230067    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700 san=[127.0.0.1 172.17.88.175 ha-528700 localhost minikube]
	I0603 04:19:13.392366    1052 provision.go:177] copyRemoteCerts
	I0603 04:19:13.403353    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:19:13.403353    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:15.529787    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:15.530739    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:15.530771    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:18.068892    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:18.069979    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:18.069979    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:19:18.178496    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7750258s)
	I0603 04:19:18.178600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:19:18.178749    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:19:18.225301    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:19:18.225881    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 04:19:18.266316    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:19:18.266887    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:19:18.314022    1052 provision.go:87] duration metric: took 14.3802829s to configureAuth
	I0603 04:19:18.314022    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:19:18.314022    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:19:18.314832    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:20.408679    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:20.408784    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:20.408784    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:22.943317    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:22.943317    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:22.948193    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:22.948889    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:22.948889    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:19:23.090330    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:19:23.090406    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:19:23.090670    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:19:23.090764    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:25.233000    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:25.233224    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:25.233224    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:27.771083    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:27.771083    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:27.777116    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:27.777116    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:27.777116    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:19:27.942412    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:19:27.942652    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:30.045101    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:30.045101    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:30.045192    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:32.572044    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:32.572927    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:32.577401    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:32.577620    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:32.577620    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:19:34.687072    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
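The command above is an install-if-changed idiom: diff the freshly rendered docker.service.new against the live unit and, only when they differ (or, as here, when no unit exists yet, hence the "can't stat" message), move it into place and daemon-reload/enable/restart. Generalized, with shell variables added for readability:

    CUR=/lib/systemd/system/docker.service
    NEW=/lib/systemd/system/docker.service.new
    sudo diff -u "$CUR" "$NEW" || {                       # diff is nonzero on change *or* missing file
      sudo mv "$NEW" "$CUR"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }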
	
	I0603 04:19:34.687156    1052 machine.go:97] duration metric: took 45.2187707s to provisionDockerMachine
	I0603 04:19:34.687187    1052 client.go:171] duration metric: took 1m54.9752507s to LocalClient.Create
	I0603 04:19:34.687226    1052 start.go:167] duration metric: took 1m54.9754452s to libmachine.API.Create "ha-528700"
	I0603 04:19:34.687226    1052 start.go:293] postStartSetup for "ha-528700" (driver="hyperv")
	I0603 04:19:34.687276    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:19:34.701301    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:19:34.701301    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:36.796284    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:36.796653    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:36.796653    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:39.274846    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:39.275628    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:39.275628    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:19:39.379802    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6784917s)
	I0603 04:19:39.390396    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:19:39.397223    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:19:39.397308    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:19:39.397677    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:19:39.398241    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:19:39.398241    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:19:39.410833    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:19:39.428484    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:19:39.474103    1052 start.go:296] duration metric: took 4.7868161s for postStartSetup
	I0603 04:19:39.476083    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:44.066415    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:44.066415    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:44.066812    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:19:44.069713    1052 start.go:128] duration metric: took 2m4.3624882s to createHost
	I0603 04:19:44.069811    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:46.141675    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:46.141675    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:46.142131    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:48.785078    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:48.785285    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:48.790988    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:48.791178    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:48.791178    1052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 04:19:48.926441    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717413588.932657281
	
	I0603 04:19:48.926538    1052 fix.go:216] guest clock: 1717413588.932657281
	I0603 04:19:48.926538    1052 fix.go:229] Guest: 2024-06-03 04:19:48.932657281 -0700 PDT Remote: 2024-06-03 04:19:44.0697138 -0700 PDT m=+129.875455801 (delta=4.862943481s)
	I0603 04:19:48.926538    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:53.469419    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:53.469601    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:53.475370    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:53.475520    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:53.475520    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717413588
	I0603 04:19:53.616830    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:19:48 UTC 2024
	
	I0603 04:19:53.616888    1052 fix.go:236] clock set: Mon Jun  3 11:19:48 UTC 2024
	 (err=<nil>)
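The fix.go lines above compare the guest clock against the host (a 4.86s skew here, accumulated while the VM booted) and reset the guest over SSH with date -s @<epoch>. A rough standalone version; the 1-second tolerance and the key/address are assumptions for illustration, not minikube's exact policy:

    host=$(date +%s)
    guest=$(ssh -i id_rsa docker@172.17.88.175 'date +%s')
    skew=$(( guest - host ))
    if [ "${skew#-}" -gt 1 ]; then                        # ${skew#-} = absolute value of the skew
      ssh -i id_rsa docker@172.17.88.175 "sudo date -s @${host}"
    fi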
	I0603 04:19:53.616888    1052 start.go:83] releasing machines lock for "ha-528700", held for 2m13.9100203s
	I0603 04:19:53.617233    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:55.697877    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:55.697877    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:55.698034    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:58.239049    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:58.239278    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:58.244542    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:19:58.244618    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:58.255849    1052 ssh_runner.go:195] Run: cat /version.json
	I0603 04:19:58.255849    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:00.443259    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:00.443294    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:00.443390    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:00.446608    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:00.446608    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:00.447138    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:03.045962    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:20:03.045962    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:03.046534    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:20:03.068618    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:20:03.069143    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:03.069346    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:20:03.147600    1052 ssh_runner.go:235] Completed: cat /version.json: (4.8917413s)
	I0603 04:20:03.159582    1052 ssh_runner.go:195] Run: systemctl --version
	I0603 04:20:03.227940    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9832665s)
	I0603 04:20:03.242351    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 04:20:03.251060    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:20:03.261700    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:20:03.288840    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 04:20:03.288840    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:20:03.289014    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:20:03.333188    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:20:03.364444    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:20:03.386309    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:20:03.396698    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:20:03.428257    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:20:03.461065    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:20:03.491085    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:20:03.521004    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:20:03.552125    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:20:03.581343    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:20:03.612969    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:20:03.643585    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:20:03.672804    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:20:03.700795    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:03.905976    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 04:20:03.936400    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:20:03.949218    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:20:03.985022    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:20:04.020448    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:20:04.071731    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:20:04.108009    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:20:04.143469    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:20:04.207257    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:20:04.234836    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:20:04.281169    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:20:04.296750    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:20:04.315070    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:20:04.355641    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:20:04.538873    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:20:04.731474    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:20:04.731528    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:20:04.775348    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:04.976155    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:20:07.481092    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5049321s)
	I0603 04:20:07.493884    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:20:07.528430    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:20:07.562450    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:20:07.744712    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:20:07.921187    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:08.111414    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:20:08.155596    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:20:08.188947    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:08.381839    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:20:08.495946    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:20:08.510245    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:20:08.520576    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:20:08.533217    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:20:08.550794    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:20:08.602284    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:20:08.610545    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:20:08.650495    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:20:08.683263    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:20:08.683780    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:20:08.691391    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:20:08.691391    1052 ip.go:210] interface addr: 172.17.80.1/20
	I0603 04:20:08.702174    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:20:08.708360    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
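That one-liner registers host.minikube.internal as the host's switch-side address (172.17.80.1) inside the guest: filter out any stale entry, append the current one, and copy the temp file back; cp rather than mv looks deliberate, since it rewrites /etc/hosts in place instead of replacing its inode. Unrolled:

    { grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any old mapping
      printf '172.17.80.1\thost.minikube.internal\n'      # add the current one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                          # overwrite in place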
	I0603 04:20:08.747538    1052 kubeadm.go:877] updating cluster {Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 04:20:08.747538    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:20:08.757528    1052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:20:08.784796    1052 docker.go:685] Got preloaded images: 
	I0603 04:20:08.784796    1052 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 04:20:08.795780    1052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 04:20:08.823763    1052 ssh_runner.go:195] Run: which lz4
	I0603 04:20:08.829463    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 04:20:08.841700    1052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 04:20:08.847855    1052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 04:20:08.847855    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 04:20:10.761557    1052 docker.go:649] duration metric: took 1.9316157s to copy over tarball
	I0603 04:20:10.774161    1052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 04:20:19.330577    1052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5563985s)
	I0603 04:20:19.330577    1052 ssh_runner.go:146] rm: /preloaded.tar.lz4
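
The block above is the preload fast path: docker images reports an empty cache, the stat probe exits 1 because /preloaded.tar.lz4 is absent, so minikube scps the ~360 MB preloaded-image tarball into the VM, untars it into /var with lz4 decompression (security xattrs preserved so file capabilities survive), and then deletes the tarball. A local sketch of the probe-then-extract step, assuming tar and lz4 are installed (it mirrors the logged command; it is not minikube's ssh_runner):

    // Sketch: probe for the preload tarball, unpack it into /var, then remove it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); os.IsNotExist(err) {
            fmt.Println("tarball missing; minikube would scp it over first")
            return
        }
        // Same flags as the logged command: keep security xattrs, decompress
        // with lz4, extract under /var (needs root).
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        if err := os.Remove(tarball); err != nil { // also needs root here
            panic(err)
        }
    }
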
	I0603 04:20:19.394097    1052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 04:20:19.415538    1052 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 04:20:19.459955    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:19.657766    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:20:22.621842    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9640699s)
	I0603 04:20:22.632574    1052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:20:22.657731    1052 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 04:20:22.657731    1052 cache_images.go:84] Images are preloaded, skipping loading
	I0603 04:20:22.657731    1052 kubeadm.go:928] updating node { 172.17.88.175 8443 v1.30.1 docker true true} ...
	I0603 04:20:22.657731    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.88.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
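
The kubelet fragment above is a systemd drop-in (written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf): the empty ExecStart= first clears the command inherited from the base kubelet.service, then the second ExecStart= installs the node-specific one with --hostname-override and --node-ip. A sketch of rendering such a drop-in from the logged values with Go's text/template (the template text is illustrative, not minikube's actual template):

    // Sketch: render a kubelet systemd drop-in from per-node values.
    package main

    import (
        "os"
        "text/template"
    )

    // Values taken from the log lines above.
    type kubeletOpts struct {
        KubernetesVersion, Hostname, NodeIP string
    }

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        if err := t.Execute(os.Stdout, kubeletOpts{"v1.30.1", "ha-528700", "172.17.88.175"}); err != nil {
            panic(err)
        }
    }
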
	I0603 04:20:22.665577    1052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 04:20:22.697453    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:20:22.697453    1052 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0603 04:20:22.697453    1052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 04:20:22.697453    1052 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.88.175 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-528700 NodeName:ha-528700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.88.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.88.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 04:20:22.697977    1052 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.88.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-528700"
	  kubeletExtraArgs:
	    node-ip: 172.17.88.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.88.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
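
The kubeadm.yaml rendered above is one file holding four YAML documents separated by ---: an InitConfiguration (node-local bootstrap: advertise address, CRI socket, taints), a ClusterConfiguration (cluster-wide: control-plane endpoint, cert SANs, admission plugins, pod/service CIDRs), a KubeletConfiguration, and a KubeProxyConfiguration. A quick way to sanity-check such a file is to split it and report each document's kind; the sketch below assumes the third-party gopkg.in/yaml.v3 package, which is not part of minikube's code path:

    // Sketch: list the kind/apiVersion of each document in a multi-doc kubeadm.yaml.
    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3" // assumed dependency
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            panic(err)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
        }
    }
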
	I0603 04:20:22.698095    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:20:22.709008    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:20:22.744297    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:20:22.744297    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
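
The manifest above runs kube-vip as a static pod on each control-plane node: with cp_enable and vip_leaderelection set, the instances compete for the plndr-cp-lock lease, and the current leader ARPs for the floating address 172.17.95.254 and (lb_enable) spreads port 8443 across the API servers, which is why the kubeconfig later targets https://172.17.95.254:8443 rather than any single node. A tiny reachability probe for that VIP (illustrative; the address and port come from the manifest above):

    // Sketch: check that the kube-vip virtual IP answers on the API port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "172.17.95.254:8443", 5*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable yet:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP answering at", conn.RemoteAddr())
    }
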
	I0603 04:20:22.756009    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:20:22.771505    1052 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 04:20:22.784716    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 04:20:22.802485    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 04:20:22.831253    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:20:22.859871    1052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0603 04:20:22.889888    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0603 04:20:22.932230    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:20:22.939359    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:20:22.971841    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:23.158580    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:20:23.188521    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.88.175
	I0603 04:20:23.188521    1052 certs.go:194] generating shared ca certs ...
	I0603 04:20:23.188521    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.189476    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:20:23.189476    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:20:23.190194    1052 certs.go:256] generating profile certs ...
	I0603 04:20:23.190984    1052 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:20:23.190984    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt with IP's: []
	I0603 04:20:23.270593    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt ...
	I0603 04:20:23.270593    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt: {Name:mk26f6668f30a24f17487b3468c5967d94a7b23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.272674    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key ...
	I0603 04:20:23.272674    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key: {Name:mk99d1965e4aa7cd3f8387d67207dbf318ee3dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.274634    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705
	I0603 04:20:23.274932    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.95.254]
	I0603 04:20:23.472931    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 ...
	I0603 04:20:23.472931    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705: {Name:mke45570e1156208409a537001364befd204b3a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.474569    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705 ...
	I0603 04:20:23.474569    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705: {Name:mkb59daca4be328d47fbfa517734e651ff3daf7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.475342    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:20:23.487805    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
	I0603 04:20:23.489612    1052 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:20:23.490207    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt with IP's: []
	I0603 04:20:23.773112    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt ...
	I0603 04:20:23.773112    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt: {Name:mk890eea760a932863e8b60d5a4125a5a0573734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.775051    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key ...
	I0603 04:20:23.775051    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key: {Name:mkf824e09a768b2cc3bd2d9fc3ba5d6dbdb038a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
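
The certs.go/crypto.go sequence above creates the profile's three key pairs: a client cert for minikube-user, an apiserver serving cert, and the aggregator proxy-client cert. Note the apiserver cert's IP list: the in-cluster service IP 10.96.0.1, loopback, the node IP 172.17.88.175, and the HA VIP 172.17.95.254, so the same cert is valid however the API server is reached. A compact sketch of issuing a cert for such an IP set with crypto/x509 (self-signed here for brevity; minikube signs these against its minikubeCA instead):

    // Sketch: issue a serving certificate whose SANs are a list of IPs.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            // The SAN IP list from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("172.17.88.175"),
                net.ParseIP("172.17.95.254"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
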
	I0603 04:20:23.776093    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:20:23.777463    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:20:23.777708    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:20:23.777818    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:20:23.787672    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:20:23.788638    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:20:23.789004    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:20:23.789269    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:20:23.789383    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:20:23.789811    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:20:23.789999    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:20:23.790651    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:20:23.790979    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:20:23.791120    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:20:23.791120    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:23.791773    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:20:23.842816    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:20:23.888945    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:20:23.948373    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:20:23.997908    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 04:20:24.067558    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 04:20:24.103484    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:20:24.140695    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:20:24.185804    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:20:24.228262    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:20:24.272802    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:20:24.318227    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 04:20:24.362026    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:20:24.384389    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:20:24.418076    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.423990    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.436026    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.458151    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 04:20:24.491862    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:20:24.524181    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.531626    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.543318    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.562712    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:20:24.594792    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:20:24.627181    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.634276    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.645382    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.666552    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
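
The openssl block above wires the copied PEMs into the system trust store: OpenSSL locates CA certificates in /etc/ssl/certs by subject-hash filenames, so each cert gets a symlink named <subject-hash>.0 (51391683.0 for 7364.pem, 3ec20f2e.0 for 73642.pem, b5213941.0 for minikubeCA.pem). A sketch of creating one such link, shelling out to the same openssl x509 -hash call the log uses (needs write access to /etc/ssl/certs):

    // Sketch: symlink a CA certificate under its OpenSSL subject hash.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
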
	I0603 04:20:24.695492    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:20:24.702220    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:20:24.702747    1052 kubeadm.go:391] StartCluster: {Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clu
sterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:20:24.712656    1052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 04:20:24.746385    1052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 04:20:24.778605    1052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 04:20:24.807395    1052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 04:20:24.832555    1052 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 04:20:24.832555    1052 kubeadm.go:156] found existing configuration files:
	
	I0603 04:20:24.844251    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 04:20:24.869327    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 04:20:24.881456    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 04:20:24.913518    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 04:20:24.932561    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 04:20:24.946431    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 04:20:24.981090    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 04:20:25.003717    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 04:20:25.015576    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 04:20:25.046071    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 04:20:25.064594    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 04:20:25.076990    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 04:20:25.101745    1052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 04:20:25.593249    1052 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 04:20:40.843144    1052 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 04:20:40.843338    1052 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 04:20:40.843567    1052 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 04:20:40.843757    1052 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 04:20:40.844059    1052 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0603 04:20:40.844268    1052 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 04:20:40.851623    1052 out.go:204]   - Generating certificates and keys ...
	I0603 04:20:40.851623    1052 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 04:20:40.851623    1052 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 04:20:40.852308    1052 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-528700 localhost] and IPs [172.17.88.175 127.0.0.1 ::1]
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-528700 localhost] and IPs [172.17.88.175 127.0.0.1 ::1]
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 04:20:40.854345    1052 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 04:20:40.854523    1052 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 04:20:40.854653    1052 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 04:20:40.854744    1052 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 04:20:40.854947    1052 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 04:20:40.855170    1052 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 04:20:40.855389    1052 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 04:20:40.858034    1052 out.go:204]   - Booting up control plane ...
	I0603 04:20:40.858262    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 04:20:40.858450    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 04:20:40.858680    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 04:20:40.858877    1052 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 04:20:40.859035    1052 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 04:20:40.859172    1052 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 04:20:40.859172    1052 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 04:20:40.859172    1052 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 04:20:40.859731    1052 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002233583s
	I0603 04:20:40.859925    1052 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 04:20:40.860008    1052 kubeadm.go:309] [api-check] The API server is healthy after 8.793195013s
	I0603 04:20:40.860008    1052 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 04:20:40.860578    1052 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 04:20:40.860848    1052 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 04:20:40.861344    1052 kubeadm.go:309] [mark-control-plane] Marking the node ha-528700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 04:20:40.861459    1052 kubeadm.go:309] [bootstrap-token] Using token: 4zfnhz.pxe484xavk1amvz9
	I0603 04:20:40.864555    1052 out.go:204]   - Configuring RBAC rules ...
	I0603 04:20:40.864555    1052 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 04:20:40.865301    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 04:20:40.865721    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 04:20:40.866530    1052 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 04:20:40.866530    1052 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 04:20:40.866805    1052 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.866805    1052 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.866805    1052 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.867390    1052 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 04:20:40.867566    1052 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 04:20:40.867566    1052 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 04:20:40.867566    1052 kubeadm.go:309] 
	I0603 04:20:40.867866    1052 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 04:20:40.867866    1052 kubeadm.go:309] 
	I0603 04:20:40.867986    1052 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 04:20:40.867986    1052 kubeadm.go:309] 
	I0603 04:20:40.868145    1052 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 04:20:40.868145    1052 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 04:20:40.868145    1052 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 04:20:40.868145    1052 kubeadm.go:309] 
	I0603 04:20:40.868145    1052 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 04:20:40.868813    1052 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 04:20:40.868813    1052 kubeadm.go:309] 
	I0603 04:20:40.868813    1052 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zfnhz.pxe484xavk1amvz9 \
	I0603 04:20:40.868813    1052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 \
	I0603 04:20:40.870411    1052 kubeadm.go:309] 	--control-plane 
	I0603 04:20:40.870442    1052 kubeadm.go:309] 
	I0603 04:20:40.870608    1052 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 04:20:40.870608    1052 kubeadm.go:309] 
	I0603 04:20:40.870646    1052 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zfnhz.pxe484xavk1amvz9 \
	I0603 04:20:40.871054    1052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
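
The --discovery-token-ca-cert-hash in both join commands pins the cluster CA: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which a joining node uses to verify the CA it discovers over the bootstrap token. The value can be reproduced from ca.crt with the standard library alone:

    // Sketch: recompute kubeadm's discovery-token-ca-cert-hash from a CA cert.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.crt") // e.g. /var/lib/minikube/certs/ca.crt
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
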
	I0603 04:20:40.871207    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:20:40.871234    1052 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0603 04:20:40.874456    1052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 04:20:40.888114    1052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 04:20:40.896789    1052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 04:20:40.896789    1052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 04:20:40.945803    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 04:20:41.548297    1052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 04:20:41.562830    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:41.562830    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700 minikube.k8s.io/updated_at=2024_06_03T04_20_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=true
	I0603 04:20:41.575916    1052 ops.go:34] apiserver oom_adj: -16
	I0603 04:20:41.760200    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:42.264027    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:42.764087    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:43.265886    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:43.765929    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:44.267711    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:44.769589    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:45.274121    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:45.764624    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:46.266962    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:46.769697    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:47.262470    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:47.760475    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:48.263396    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:48.764931    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:49.271031    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:49.760310    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:50.263598    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:50.772868    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:51.260213    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:51.774569    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:52.274128    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:52.765484    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:53.271527    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:53.413092    1052 kubeadm.go:1107] duration metric: took 11.8647703s to wait for elevateKubeSystemPrivileges
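
The run of kubectl get sa default calls above is a readiness poll: after creating the minikube-rbac cluster role binding, minikube retries about twice a second until the token controller has populated the default ServiceAccount (11.86 s in this run). The generic poll-until-exit-0 pattern, sketched (the command and ~500 ms interval are the ones visible in the log; the 2-minute deadline is an assumption):

    // Sketch: poll a command until it succeeds or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
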
	W0603 04:20:53.413211    1052 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 04:20:53.413317    1052 kubeadm.go:393] duration metric: took 28.710404s to StartCluster
	I0603 04:20:53.413317    1052 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:53.413552    1052 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:20:53.415362    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:53.416675    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 04:20:53.416675    1052 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:20:53.416779    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:20:53.416779    1052 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 04:20:53.416938    1052 addons.go:69] Setting storage-provisioner=true in profile "ha-528700"
	I0603 04:20:53.416938    1052 addons.go:69] Setting default-storageclass=true in profile "ha-528700"
	I0603 04:20:53.416998    1052 addons.go:234] Setting addon storage-provisioner=true in "ha-528700"
	I0603 04:20:53.417037    1052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-528700"
	I0603 04:20:53.417120    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:20:53.417237    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:20:53.417856    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:53.418365    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:53.608903    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 04:20:53.974253    1052 start.go:946] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
	I0603 04:20:55.744047    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:55.747986    1052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 04:20:55.745050    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:20:55.750154    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 04:20:55.750938    1052 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:20:55.750938    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 04:20:55.751102    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:55.752207    1052 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 04:20:55.752207    1052 addons.go:234] Setting addon default-storageclass=true in "ha-528700"
	I0603 04:20:55.752737    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:20:55.753915    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:58.094478    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:58.094535    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:58.094567    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:58.244902    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:58.245739    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:58.245816    1052 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 04:20:58.245816    1052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 04:20:58.245816    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:21:00.542797    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:00.542797    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:00.543867    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:00.895492    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:21:00.895492    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:00.895492    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:21:01.031099    1052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:21:03.312217    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:21:03.312388    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:03.312615    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:21:03.458838    1052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 04:21:03.650430    1052 round_trippers.go:463] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 04:21:03.650430    1052 round_trippers.go:469] Request Headers:
	I0603 04:21:03.650430    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:21:03.650430    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:21:03.664725    1052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 04:21:03.665936    1052 round_trippers.go:463] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 04:21:03.665936    1052 round_trippers.go:469] Request Headers:
	I0603 04:21:03.665936    1052 round_trippers.go:473]     Content-Type: application/json
	I0603 04:21:03.665936    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:21:03.665936    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:21:03.668565    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:21:03.672613    1052 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 04:21:03.676602    1052 addons.go:510] duration metric: took 10.2598013s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 04:21:03.676602    1052 start.go:245] waiting for cluster config update ...
	I0603 04:21:03.676602    1052 start.go:254] writing updated cluster config ...
	I0603 04:21:03.679565    1052 out.go:177] 
	I0603 04:21:03.691600    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:21:03.691600    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:21:03.698575    1052 out.go:177] * Starting "ha-528700-m02" control-plane node in "ha-528700" cluster
	I0603 04:21:03.700610    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:21:03.700610    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:21:03.701571    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:21:03.701571    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:21:03.701571    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:21:03.704568    1052 start.go:360] acquireMachinesLock for ha-528700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:21:03.704568    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700-m02"
	I0603 04:21:03.704568    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:21:03.704568    1052 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 04:21:03.708561    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:21:03.708561    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:21:03.708561    1052 client.go:168] LocalClient.Create starting
	I0603 04:21:03.708561    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:21:05.722266    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:21:05.722710    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:05.722710    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:21:07.503929    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:21:07.504620    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:07.504620    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:21:12.764588    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:21:12.764886    1052 main.go:141] libmachine: [stderr =====>] : 
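
The switch query above accepts either an External vSwitch or the well-known "Default Switch" GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444; the SwitchType value 1 in the JSON is Hyper-V's Internal type, which is why the Default Switch passes despite not being External. A minimal Go sketch of issuing such a query and decoding the result, assuming an illustrative VMSwitch struct rather than minikube's actual types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // VMSwitch mirrors the fields selected by the PowerShell query above.
    type VMSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
        query := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
        if err != nil {
            panic(err)
        }
        var switches []VMSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }

Wrapping the pipeline in @(...) before ConvertTo-Json guarantees a JSON array even when a single switch matches, which keeps the decode side simple.
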
	I0603 04:21:12.768485    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:21:13.276095    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:21:13.449041    1052 main.go:141] libmachine: Creating VM...
	I0603 04:21:13.449041    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:21:16.397677    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:21:16.397677    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:16.398553    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:21:16.398738    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:21:18.183564    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:21:18.183701    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:18.183701    1052 main.go:141] libmachine: Creating VHD
	I0603 04:21:18.183701    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:21:22.009862    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1162F8CB-005F-460A-BFAA-B3F8A25F2E8A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:21:22.010726    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:22.010726    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:21:22.010726    1052 main.go:141] libmachine: Writing SSH key tar header
	I0603 04:21:22.020863    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:21:25.208511    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:25.208896    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:25.208950    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd' -SizeBytes 20000MB
	I0603 04:21:27.803631    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:27.804228    1052 main.go:141] libmachine: [stderr =====>] : 
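
"Writing magic tar header" / "Writing SSH key tar header" above is libmachine's trick for seeding the disk before first boot: the driver creates a small fixed-size VHD, writes a tar stream carrying the SSH public key directly into the raw image, then converts it to a dynamic VHD and resizes it to the requested 20000MB; the boot2docker guest recognizes the tar signature on first boot and extracts the key. A rough sketch of the tar-writing half, with the file layout and entry name assumed for illustration only:

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        // A fixed VHD's data area is roughly a byte-for-byte raw image (footer at
        // the end), so writing at offset 0 lands in the guest-visible disk area.
        // Assumption for illustration; the exact layout lives in the hyperv driver.
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        pubKey, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            panic(err)
        }

        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            panic(err)
        }
        if _, err := tw.Write(pubKey); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
    }
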
	I0603 04:21:27.804228    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:21:31.450856    1052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-528700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:21:31.450856    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:31.451417    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700-m02 -DynamicMemoryEnabled $false
	I0603 04:21:33.696630    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:33.696630    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:33.697631    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700-m02 -Count 2
	I0603 04:21:35.876949    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:35.878150    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:35.878150    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\boot2docker.iso'
	I0603 04:21:38.473611    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:38.474555    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:38.474817    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd'
	I0603 04:21:41.148156    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:41.148533    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:41.148533    1052 main.go:141] libmachine: Starting VM...
	I0603 04:21:41.148533    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700-m02
	I0603 04:21:44.245171    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:44.245318    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:44.245318    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:21:44.245318    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:46.569829    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:46.570516    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:46.570516    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:49.117156    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:49.117156    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:50.129203    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:52.370523    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:52.371347    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:52.371347    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:54.941455    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:54.941455    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:55.954188    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:00.822515    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:22:00.822515    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:01.831514    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:04.082153    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:04.083051    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:04.083149    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:06.655011    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:22:06.655011    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:07.669479    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:09.931311    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:09.932201    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:09.932201    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:12.538757    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:12.538757    1052 main.go:141] libmachine: [stderr =====>] : 
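
The alternating Get-VM state / ipaddresses[0] calls above are a poll loop: the adapter reports no address until the guest obtains its DHCP lease, so the driver retries roughly once a second (compare the timestamps) until a non-empty address comes back. Condensed into a sketch, with a hypothetical ps() helper wrapping powershell.exe:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const vm = "ha-528700-m02"
        for {
            state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err != nil || state != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if err == nil && ip != "" {
                fmt.Println("guest IP:", ip) // e.g. 172.17.84.187 above
                return
            }
            time.Sleep(time.Second)
        }
    }
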
	I0603 04:22:12.539127    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:14.713008    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:14.713008    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:14.713151    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:22:14.713215    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:19.509033    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:19.509407    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:19.515272    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:19.526105    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:19.526105    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:22:19.656578    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:22:19.656690    1052 buildroot.go:166] provisioning hostname "ha-528700-m02"
	I0603 04:22:19.656690    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:21.759586    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:21.760296    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:21.760296    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:24.319535    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:24.319535    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:24.324108    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:24.325113    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:24.325113    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700-m02 && echo "ha-528700-m02" | sudo tee /etc/hostname
	I0603 04:22:24.484271    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700-m02
	
	I0603 04:22:24.484271    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:29.183689    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:29.183689    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:29.190393    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:29.190393    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:29.190920    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:22:29.340169    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
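
"Using SSH client type: native" above means libmachine speaks SSH from Go (golang.org/x/crypto/ssh) rather than shelling out to an ssh.exe. A minimal sketch of that style of client running the hostname command, with the address and key path taken from the log and host-key verification skipped for brevity:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no host-key pinning in this sketch
        }
        client, err := ssh.Dial("tcp", "172.17.84.187:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }
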
	I0603 04:22:29.340169    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:22:29.340169    1052 buildroot.go:174] setting up certificates
	I0603 04:22:29.340169    1052 provision.go:84] configureAuth start
	I0603 04:22:29.340169    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:31.458611    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:31.458611    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:31.459708    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:34.031745    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:34.032233    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:34.032284    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:38.700067    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:38.700067    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:38.700067    1052 provision.go:143] copyHostCerts
	I0603 04:22:38.700714    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:22:38.700766    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:22:38.700766    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:22:38.701513    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:22:38.702287    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:22:38.702932    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:22:38.702932    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:22:38.702932    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:22:38.704493    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:22:38.705036    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:22:38.705138    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:22:38.705355    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:22:38.706197    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700-m02 san=[127.0.0.1 172.17.84.187 ha-528700-m02 localhost minikube]
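
The server certificate is minted from the local CA with SANs covering every name the Docker daemon might be reached by, per the san=[...] list above. A compact standard-library sketch of how IP and DNS SANs are carried in Go's crypto/x509 (self-signed here to stay self-contained, whereas the real flow signs with ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-528700-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs: IP and DNS names travel in separate fields.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.84.187")},
            DNSNames:    []string{"ha-528700-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
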
	I0603 04:22:38.829534    1052 provision.go:177] copyRemoteCerts
	I0603 04:22:38.843505    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:22:38.843505    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:43.575641    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:43.575641    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:43.575641    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:22:43.682390    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8388743s)
	I0603 04:22:43.682390    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:22:43.683420    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:22:43.733638    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:22:43.733638    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 04:22:43.783124    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:22:43.783436    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:22:43.829357    1052 provision.go:87] duration metric: took 14.489156s to configureAuth
	I0603 04:22:43.829357    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:22:43.830175    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:22:43.830384    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:45.950821    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:45.950821    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:45.950923    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:48.506933    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:48.506933    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:48.516645    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:48.516645    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:48.516645    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:22:48.650635    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:22:48.650635    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:22:48.650635    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:22:48.650635    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:50.906336    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:50.906336    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:50.907076    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:53.547647    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:53.547647    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:53.553609    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:53.554186    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:53.554186    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.88.175"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:22:53.709095    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.88.175
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:22:53.709177    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:58.416589    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:58.416716    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:58.421156    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:58.421822    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:58.421898    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:23:00.536633    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 04:23:00.536736    1052 machine.go:97] duration metric: took 45.8234333s to provisionDockerMachine
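
The docker.service pushed above is rendered host-side and installed as a full replacement unit, which is why it must begin with an empty ExecStart= before its own ExecStart (the embedded comments explain the systemd rule); values such as the first control plane's NO_PROXY are baked in at render time. A tiny text/template sketch of that rendering step, trimmed to the interesting lines and with field names assumed:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Service]
    Environment="NO_PROXY={{.NoProxy}}"
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --insecure-registry {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        _ = t.Execute(os.Stdout, struct{ NoProxy, ServiceCIDR string }{
            NoProxy:     "172.17.88.175",  // first control-plane IP, as in the log
            ServiceCIDR: "10.96.0.0/12",   // insecure registry range, as in the log
        })
    }
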
	I0603 04:23:00.536736    1052 client.go:171] duration metric: took 1m56.82792s to LocalClient.Create
	I0603 04:23:00.536785    1052 start.go:167] duration metric: took 1m56.82792s to libmachine.API.Create "ha-528700"
	I0603 04:23:00.536785    1052 start.go:293] postStartSetup for "ha-528700-m02" (driver="hyperv")
	I0603 04:23:00.536785    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:23:00.549647    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:23:00.549647    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:02.688564    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:02.688786    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:02.688786    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:05.242778    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:05.243758    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:05.243878    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:05.351258    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8015124s)
	I0603 04:23:05.363866    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:23:05.371523    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:23:05.371670    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:23:05.372104    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:23:05.373199    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:23:05.373277    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:23:05.385605    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:23:05.405236    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:23:05.458059    1052 start.go:296] duration metric: took 4.9212631s for postStartSetup
	I0603 04:23:05.460752    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:07.662155    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:07.662155    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:07.662239    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:10.248343    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:10.248638    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:10.248856    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:23:10.251285    1052 start.go:128] duration metric: took 2m6.5452242s to createHost
	I0603 04:23:10.251285    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:12.432213    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:12.432213    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:12.432478    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:15.006943    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:15.007135    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:15.012460    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:23:15.012988    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:23:15.012988    1052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 04:23:15.156552    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717413795.158364662
	
	I0603 04:23:15.156552    1052 fix.go:216] guest clock: 1717413795.158364662
	I0603 04:23:15.156552    1052 fix.go:229] Guest: 2024-06-03 04:23:15.158364662 -0700 PDT Remote: 2024-06-03 04:23:10.2512854 -0700 PDT m=+336.056584301 (delta=4.907079262s)
	I0603 04:23:15.156685    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:17.333275    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:17.333703    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:17.333703    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:19.867341    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:19.867341    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:19.873377    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:23:19.873924    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:23:19.873991    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717413795
	I0603 04:23:20.016547    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:23:15 UTC 2024
	
	I0603 04:23:20.016547    1052 fix.go:236] clock set: Mon Jun  3 11:23:15 UTC 2024
	 (err=<nil>)
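
createHost ran for over two minutes, so the guest clock, set at boot, had drifted 4.9s from the host; the remedy is simply resetting it with sudo date -s @<epoch> using the host's current time. A sketch of the delta computation and the command it produces, with the guest timestamp taken from the log:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as recorded in the log.
        raw := "1717413795.158364662"
        guestSec, err := strconv.ParseFloat(raw, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(guestSec*1e9))
        fmt.Printf("guest-host delta: %s\n", guest.Sub(time.Now()))
        // Reset the guest wall clock to the host's epoch seconds over SSH:
        fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
    }
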
	I0603 04:23:20.016547    1052 start.go:83] releasing machines lock for "ha-528700-m02", held for 2m16.3116814s
	I0603 04:23:20.016547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:24.788585    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:24.788585    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:24.791728    1052 out.go:177] * Found network options:
	I0603 04:23:24.795154    1052 out.go:177]   - NO_PROXY=172.17.88.175
	W0603 04:23:24.797580    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:23:24.799220    1052 out.go:177]   - NO_PROXY=172.17.88.175
	W0603 04:23:24.801828    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:23:24.803582    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:23:24.805999    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:23:24.805999    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:24.815037    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 04:23:24.815037    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:27.038996    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:27.038996    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:27.039082    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:27.071434    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:27.071936    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:27.072005    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:29.712378    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:29.712378    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:29.712709    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:29.740126    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:29.740126    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:29.740126    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:29.814344    1052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9992962s)
	W0603 04:23:29.814344    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:23:29.827181    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:23:29.905751    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
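
Conflicting CNI configs are disabled by renaming rather than deleting: anything under /etc/cni/net.d matching *bridge* or *podman*, except files already carrying the .mk_disabled suffix, is moved aside reversibly. An equivalent Go sketch of the find/mv command above:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, p := range matches {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
                continue
            }
            if err := os.Rename(p, p+".mk_disabled"); err != nil {
                panic(err)
            }
            fmt.Println("disabled", p) // e.g. 87-podman-bridge.conflist above
        }
    }
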
	I0603 04:23:29.905751    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:23:29.905751    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0997407s)
	I0603 04:23:29.905751    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:23:29.956726    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:23:29.988815    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:23:30.013885    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:23:30.026153    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:23:30.060446    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:23:30.092896    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:23:30.126480    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:23:30.158496    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:23:30.190313    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:23:30.224287    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:23:30.257590    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:23:30.289268    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:23:30.319205    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:23:30.350788    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:30.539554    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
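
Although Docker ends up as the runtime, containerd's config is normalized first: the sed edits above pin the pause image, force SystemdCgroup = false (the "cgroupfs" driver named in the log), and migrate legacy runtime names to io.containerd.runc.v2. A sketch of driving the same edits through a single runner function, stubbed here to print where the real code executes over SSH:

    package main

    import "fmt"

    func main() {
        run := func(cmd string) error { // stub; ssh_runner executes these remotely
            fmt.Println("run:", cmd)
            return nil
        }
        edits := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
        }
        for _, e := range edits {
            if err := run(e); err != nil {
                panic(err)
            }
        }
    }
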
	I0603 04:23:30.571926    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:23:30.583707    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:23:30.621024    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:23:30.653504    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:23:30.696536    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:23:30.733899    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:23:30.772146    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:23:30.833773    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:23:30.862091    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:23:30.908631    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:23:30.928820    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:23:30.948161    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:23:30.994484    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:23:31.190604    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:23:31.375884    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:23:31.375884    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
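
The 130-byte /etc/docker/daemon.json pushed here is what actually flips Docker to the cgroupfs driver; the log records only the byte count, so the payload below is an assumption about its likely shape, not the recorded content:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            // Assumed payload: the log reports "(130 bytes)" but not the bytes.
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }
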
	I0603 04:23:31.423000    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:31.619370    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:23:34.132804    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5133434s)
	I0603 04:23:34.144327    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:23:34.179600    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:23:34.213277    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:23:34.407633    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:23:34.612074    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:34.801650    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:23:34.840818    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:23:34.876154    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:35.063807    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:23:35.164501    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:23:35.176848    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:23:35.188170    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:23:35.199333    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:23:35.221406    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:23:35.278813    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:23:35.288496    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:23:35.330584    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:23:35.371338    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:23:35.374913    1052 out.go:177]   - env NO_PROXY=172.17.88.175
	I0603 04:23:35.378507    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:23:35.384440    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:23:35.384440    1052 ip.go:210] interface addr: 172.17.80.1/20
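
getIPForInterface scans the host's adapters for the name prefix "vEthernet (Default Switch)" and takes that interface's IPv4 address (172.17.80.1/20 above), which is then written into the guest's /etc/hosts as host.minikube.internal. A standard-library sketch of that lookup:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
                continue // e.g. "Ethernet 2" and the loopback are skipped above
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                panic(err)
            }
            for _, a := range addrs {
                // Skip the fe80:: link-local entry; keep the first IPv4 address.
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    fmt.Println("host-side switch IP:", ipnet.IP) // e.g. 172.17.80.1
                    return
                }
            }
        }
    }
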
	I0603 04:23:35.398131    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:23:35.402833    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:23:35.424663    1052 mustload.go:65] Loading cluster: ha-528700
	I0603 04:23:35.425417    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:23:35.425625    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:37.546041    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:37.546154    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:37.546154    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:23:37.546897    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.84.187
	I0603 04:23:37.546970    1052 certs.go:194] generating shared ca certs ...
	I0603 04:23:37.546970    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.547582    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:23:37.547985    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:23:37.548172    1052 certs.go:256] generating profile certs ...
	I0603 04:23:37.548865    1052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:23:37.548987    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff
	I0603 04:23:37.549130    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.84.187 172.17.95.254]
	I0603 04:23:37.753770    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff ...
	I0603 04:23:37.753770    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff: {Name:mk7956f77c939d9937df83e7fa7d3795b88314ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.755436    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff ...
	I0603 04:23:37.755436    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff: {Name:mk1c2e06615cac10354428838aeefade4c6ae3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.756609    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:23:37.770630    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
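The fresh apiserver serving cert carries IP SANs for the in-cluster service address (10.96.0.1), localhost, both control-plane node IPs, and the HA virtual IP (172.17.95.254), so clients can reach the API server by any of them. A self-contained crypto/x509 sketch of attaching those SANs (self-signed here for brevity; the real cert is signed by minikubeCA):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Template carrying the same IP SANs logged above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.88.175"), net.ParseIP("172.17.84.187"), net.ParseIP("172.17.95.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert with %d IP SANs\n", len(der), len(tmpl.IPAddresses))
}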
	I0603 04:23:37.772249    1052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:23:37.772313    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:23:37.772550    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:23:37.773307    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:23:37.773462    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:23:37.774023    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:23:37.774023    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:23:37.774023    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:23:37.774739    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:23:37.775048    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:23:37.775312    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:23:37.775312    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:23:37.775849    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:23:37.775994    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:37.776193    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:23:37.776404    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:39.926458    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:39.926746    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:39.926746    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:42.561361    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:23:42.561541    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:42.561541    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:23:42.656467    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 04:23:42.664385    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 04:23:42.695582    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 04:23:42.702073    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 04:23:42.733607    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 04:23:42.740981    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 04:23:42.773024    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 04:23:42.780044    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 04:23:42.810423    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 04:23:42.818076    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 04:23:42.847598    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 04:23:42.853543    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 04:23:42.873995    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:23:42.922924    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:23:42.973724    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:23:43.036412    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:23:43.083611    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 04:23:43.128079    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 04:23:43.170362    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:23:43.221562    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:23:43.267447    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:23:43.314607    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:23:43.362209    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:23:43.407048    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 04:23:43.437230    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 04:23:43.466682    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 04:23:43.497357    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 04:23:43.533208    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 04:23:43.570076    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 04:23:43.601918    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 04:23:43.646926    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:23:43.664982    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:23:43.694410    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.701338    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.712144    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.731615    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:23:43.762929    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:23:43.794574    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.800379    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.811710    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.832694    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 04:23:43.862751    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:23:43.897551    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.904553    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.916107    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.936655    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
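The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: each trusted certificate is linked into /etc/ssl/certs under its subject-name hash plus a ".0" suffix, which is how OpenSSL-based clients locate an issuer. A sketch of one such step, assuming openssl is on PATH and the paths below exist:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and
// links it into the trust directory as <hash>.0, mirroring the
// `openssl x509 -hash -noout` + `ln -fs` pair in the log above.
func linkCert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` does
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}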
	I0603 04:23:43.968390    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:23:43.974986    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:23:43.975292    1052 kubeadm.go:928] updating node {m02 172.17.84.187 8443 v1.30.1 docker true true} ...
	I0603 04:23:43.975407    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.84.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
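In the kubelet unit rendered above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the base unit's command so the following ExecStart replaces it outright rather than appending a second command. A sketch that renders a similar drop-in from the node values in the log (illustrative template, not minikube's own):

package main

import (
	"os"
	"text/template"
)

// dropIn is a simplified kubelet override; the blank ExecStart= resets
// the command list inherited from the base kubelet.service unit.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.30.1",
		"Node":    "ha-528700-m02",
		"IP":      "172.17.84.187",
	})
}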
	I0603 04:23:43.975544    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:23:43.987344    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:23:44.015963    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:23:44.016061    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
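This manifest runs kube-vip in ARP mode with leader election over the plndr-cp-lock lease, so exactly one control-plane node answers for the VIP 172.17.95.254 at a time. The lease timings must nest (retry period < renew deadline < lease duration), which the 1s/3s/5s values above satisfy; a trivial check, for illustration only:

package main

import "fmt"

// validLeaseTimings reports whether leader-election timings nest
// correctly: retryPeriod < renewDeadline < leaseDuration, the ordering
// the Kubernetes leader-election machinery expects.
func validLeaseTimings(leaseDuration, renewDeadline, retryPeriod int) bool {
	return retryPeriod < renewDeadline && renewDeadline < leaseDuration
}

func main() {
	// The kube-vip manifest above uses 5 / 3 / 1 seconds.
	fmt.Println(validLeaseTimings(5, 3, 1)) // true
}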
	I0603 04:23:44.028838    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:23:44.042814    1052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 04:23:44.056793    1052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 04:23:44.079089    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0603 04:23:44.079229    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0603 04:23:44.079229    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
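Each binary is downloaded together with a published .sha256 digest, as the checksum= fragment in the URLs above indicates. A sketch of the download-then-verify pattern, assuming the digest file's first whitespace-separated field is the bare hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified fetches url into dest while hashing the stream, then
// fetches url+".sha256" and compares digests before trusting the file.
func downloadVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sum, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 || fields[0] != hex.EncodeToString(h.Sum(nil)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	fmt.Println(downloadVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl", "kubectl"))
}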
	I0603 04:23:45.069699    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:23:45.080414    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:23:45.091145    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 04:23:45.091145    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 04:23:46.470744    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:23:46.482388    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:23:46.490349    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 04:23:46.490349    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 04:23:48.058566    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:23:48.082889    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:23:48.095834    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:23:48.101881    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 04:23:48.102041    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 04:23:48.816308    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 04:23:48.834421    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 04:23:48.865713    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:23:48.899076    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0603 04:23:48.948293    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:23:48.955053    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:23:48.991542    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:49.213854    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:23:49.242043    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:23:49.243172    1052 start.go:316] joinCluster: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:23:49.243172    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 04:23:49.243172    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:51.390078    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:51.390078    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:51.390647    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:53.955907    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:23:53.956183    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:53.956354    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:23:54.156917    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9137341s)
	I0603 04:23:54.156917    1052 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:23:54.156917    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token miu2l8.dnnfyajibxax5wet --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m02 --control-plane --apiserver-advertise-address=172.17.84.187 --apiserver-bind-port=8443"
	I0603 04:24:37.482415    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token miu2l8.dnnfyajibxax5wet --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m02 --control-plane --apiserver-advertise-address=172.17.84.187 --apiserver-bind-port=8443": (43.3254022s)
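The join command authenticates the cluster through --discovery-token-ca-cert-hash: kubeadm's pin is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA, which the joining node checks before trusting the API server. A sketch that recomputes the pin from a CA certificate:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the kubeadm discovery pin ("sha256:<hex>") from
// a PEM CA certificate by hashing its DER-encoded SubjectPublicKeyInfo.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	pin, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(pin)
}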
	I0603 04:24:37.482630    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 04:24:38.401424    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700-m02 minikube.k8s.io/updated_at=2024_06_03T04_24_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=false
	I0603 04:24:38.609334    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-528700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 04:24:38.777326    1052 start.go:318] duration metric: took 49.5340442s to joinCluster
	I0603 04:24:38.777440    1052 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:24:38.780243    1052 out.go:177] * Verifying Kubernetes components...
	I0603 04:24:38.777669    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:24:38.795995    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:24:39.169463    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:24:39.203436    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:24:39.204433    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 04:24:39.204433    1052 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.88.175:8443
	I0603 04:24:39.205457    1052 node_ready.go:35] waiting up to 6m0s for node "ha-528700-m02" to be "Ready" ...
	I0603 04:24:39.205457    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:39.205457    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:39.205457    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:39.205457    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:39.223584    1052 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 04:24:39.712098    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:39.712159    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:39.712159    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:39.712159    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:39.894767    1052 round_trippers.go:574] Response Status: 200 OK in 182 milliseconds
	I0603 04:24:40.218307    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:40.218366    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:40.218366    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:40.218366    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:40.243779    1052 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0603 04:24:40.712507    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:40.712507    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:40.712567    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:40.712567    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:40.719565    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:41.206348    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:41.206559    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:41.206559    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:41.206559    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:41.212401    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:41.213841    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:41.712430    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:41.712527    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:41.712621    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:41.712621    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:41.718764    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:42.219688    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:42.219779    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:42.219779    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:42.219779    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:42.296032    1052 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I0603 04:24:42.710488    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:42.710545    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:42.710545    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:42.710545    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:42.715772    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:43.211379    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:43.211548    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:43.211548    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:43.211548    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:43.217231    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:43.217533    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:43.706729    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:43.706791    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:43.706858    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:43.706858    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:43.739456    1052 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0603 04:24:44.212149    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:44.212349    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:44.212349    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:44.212349    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:44.216933    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:44.719741    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:44.720017    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:44.720017    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:44.720105    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:44.729354    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:45.211264    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:45.211462    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:45.211462    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:45.211462    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:45.218568    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:45.219816    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:45.719803    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:45.720112    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:45.720112    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:45.720112    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:45.724843    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:46.210192    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:46.210192    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:46.210192    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:46.210192    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:46.216314    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:46.718524    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:46.718524    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:46.718591    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:46.718591    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:46.724983    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:47.207739    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:47.207898    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:47.207898    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:47.207953    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:47.212291    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:47.713852    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:47.713852    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:47.713967    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:47.713967    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:47.852991    1052 round_trippers.go:574] Response Status: 200 OK in 139 milliseconds
	I0603 04:24:47.853972    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:48.210290    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:48.210558    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:48.210558    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:48.210558    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:48.255749    1052 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0603 04:24:48.714975    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:48.715050    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:48.715050    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:48.715050    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:48.720401    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:49.219838    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:49.219903    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:49.219903    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:49.219903    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:49.225087    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:49.709076    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:49.709076    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:49.709076    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:49.709387    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:49.713855    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.210703    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.210703    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.210778    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.210778    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.217052    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:50.218098    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:50.711613    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.711745    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.711745    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.711745    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.716075    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.717742    1052 node_ready.go:49] node "ha-528700-m02" has status "Ready":"True"
	I0603 04:24:50.717841    1052 node_ready.go:38] duration metric: took 11.5122594s for node "ha-528700-m02" to be "Ready" ...
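node_ready.go re-fetches the node object roughly every half second until its Ready condition reports "True", which took 11.5s here. The predicate behind that loop, sketched against the JSON the API returns (decoding only the fields it needs):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus decodes just enough of a GET /api/v1/nodes/<name> response
// for a readiness check.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeIsReady reports whether the node's Ready condition is "True".
func nodeIsReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, _ := nodeIsReady(sample)
	fmt.Println(ok) // true once the kubelet reports Ready
}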
	I0603 04:24:50.717841    1052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:24:50.717970    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:50.717970    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.717970    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.717970    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.725710    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:50.735090    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.735090    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6tv8
	I0603 04:24:50.735090    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.735090    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.735090    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.739834    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.740525    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.740525    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.740525    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.740525    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.744847    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.745203    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.745890    1052 pod_ready.go:81] duration metric: took 10.7999ms for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.745890    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.745890    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qwkq9
	I0603 04:24:50.745890    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.746040    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.746040    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.748979    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:50.750212    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.750212    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.750212    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.750270    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.753063    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:50.753929    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.753929    1052 pod_ready.go:81] duration metric: took 8.0385ms for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.753929    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.753929    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700
	I0603 04:24:50.753929    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.753929    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.753929    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.759125    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:50.759125    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.759125    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.759125    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.759125    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.764165    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:50.764312    1052 pod_ready.go:92] pod "etcd-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.764312    1052 pod_ready.go:81] duration metric: took 10.3831ms for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.764312    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.764938    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:50.764938    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.764938    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.764938    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.769017    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.769622    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.769622    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.769622    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.769622    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.773194    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:51.271082    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:51.271082    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.271082    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.271082    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.276564    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:51.278160    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:51.278816    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.278816    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.279031    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.288074    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:51.770059    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:51.770289    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.770289    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.770289    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.775665    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:51.776601    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:51.776663    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.776663    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.776663    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.781326    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:52.276725    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:52.276725    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.276725    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.276725    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.281415    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:52.283136    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:52.283136    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.283206    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.283206    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.287252    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:52.777560    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:52.777560    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.777647    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.777647    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.783175    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:52.784827    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:52.784827    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.784827    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.784827    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.789435    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:52.790185    1052 pod_ready.go:102] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 04:24:53.276153    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:53.276153    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.276153    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.276153    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.281836    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:53.282998    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:53.282998    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.282998    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.282998    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.287601    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:53.777593    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:53.777843    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.777843    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.777843    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.782899    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:53.784561    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:53.784561    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.784561    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.784561    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.787944    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:54.265258    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:54.265258    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.265258    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.265258    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.270039    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:54.271747    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:54.271747    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.271747    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.271850    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.276122    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:54.769547    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:54.769547    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.769547    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.769547    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.777168    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:54.777985    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:54.777985    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.777985    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.777985    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.782170    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.279065    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:55.279065    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.279135    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.279135    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.286649    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:55.287564    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.287564    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.287564    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.287564    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.292299    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.293483    1052 pod_ready.go:92] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.293483    1052 pod_ready.go:81] duration metric: took 4.5291613s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.293483    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.293483    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700
	I0603 04:24:55.293483    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.293483    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.293483    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.298154    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.299819    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.299849    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.299849    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.299912    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.303136    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:55.305004    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.305123    1052 pod_ready.go:81] duration metric: took 11.6396ms for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.305123    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.305250    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:24:55.305250    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.305332    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.305332    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.309098    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:55.310127    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.310227    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.310227    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.310227    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.312906    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:55.312906    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.312906    1052 pod_ready.go:81] duration metric: took 7.7826ms for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.312906    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.312906    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:24:55.312906    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.312906    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.312906    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.322012    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:55.512106    1052 request.go:629] Waited for 188.3559ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.512375    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.512375    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.512375    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.512375    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.518168    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:55.519461    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.519573    1052 pod_ready.go:81] duration metric: took 206.6666ms for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
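A note on the "Waited for ... due to client-side throttling, not priority and fairness" lines above: the delay is imposed locally by client-go's token-bucket rate limiter (QPS/Burst on rest.Config), not by the API server. A minimal Go sketch of where that knob lives; the kubeconfig path and limit values are illustrative, and this is not minikube's actual code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; a burst of GETs beyond that
        // is delayed client-side, producing the "Waited for ..." log lines.
        cfg.QPS = 50 // illustrative values
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        fmt.Println(cs != nil, err)
    }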
	I0603 04:24:55.519573    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.714406    1052 request.go:629] Waited for 194.7003ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:24:55.714406    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:24:55.714406    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.714406    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.714406    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.719741    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:55.917254    1052 request.go:629] Waited for 195.8712ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.917254    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.917633    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.917984    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.918481    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.928022    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:55.928022    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.928970    1052 pod_ready.go:81] duration metric: took 409.3967ms for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.929023    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.119396    1052 request.go:629] Waited for 189.9588ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:24:56.119516    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:24:56.119516    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.119516    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.119516    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.125989    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:56.322382    1052 request.go:629] Waited for 194.6562ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:56.322584    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:56.322615    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.322615    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.322615    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.327126    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:56.328103    1052 pod_ready.go:92] pod "kube-proxy-dbr56" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:56.328103    1052 pod_ready.go:81] duration metric: took 399.0796ms for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.328103    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.525428    1052 request.go:629] Waited for 196.9841ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:24:56.525428    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:24:56.525678    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.525678    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.525678    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.532173    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:56.712729    1052 request.go:629] Waited for 179.216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:56.712927    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:56.712983    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.712983    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.712983    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.721677    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:24:56.722675    1052 pod_ready.go:92] pod "kube-proxy-wlzrp" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:56.722675    1052 pod_ready.go:81] duration metric: took 394.5709ms for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.722675    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.913145    1052 request.go:629] Waited for 190.2603ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:24:56.913326    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:24:56.913326    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.913326    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.913326    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.919034    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.114427    1052 request.go:629] Waited for 194.0331ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:57.114622    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:57.114689    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.114689    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.114689    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.120271    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.121233    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:57.121233    1052 pod_ready.go:81] duration metric: took 398.5576ms for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.121335    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.315971    1052 request.go:629] Waited for 194.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:24:57.315971    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:24:57.316272    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.316323    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.316323    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.321078    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:57.518570    1052 request.go:629] Waited for 196.3351ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:57.518942    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:57.518942    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.518942    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.518942    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.524963    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.525494    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:57.525494    1052 pod_ready.go:81] duration metric: took 404.1578ms for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.525494    1052 pod_ready.go:38] duration metric: took 6.8075086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
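For reference, the per-pod wait traced above (GET the pod, GET its node, sleep roughly 500ms, repeat until the Ready condition is True or the 6m0s budget runs out) reduces to a loop like the following sketch. It is not minikube's pod_ready.go; the kubeconfig path is illustrative and the node re-check each cycle is omitted:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-528700-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        fmt.Println("timed out waiting for Ready")
    }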
	I0603 04:24:57.525741    1052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:24:57.539145    1052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:24:57.565553    1052 api_server.go:72] duration metric: took 18.7878977s to wait for apiserver process to appear ...
	I0603 04:24:57.565553    1052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:24:57.565553    1052 api_server.go:253] Checking apiserver healthz at https://172.17.88.175:8443/healthz ...
	I0603 04:24:57.575087    1052 api_server.go:279] https://172.17.88.175:8443/healthz returned 200:
	ok
	I0603 04:24:57.575660    1052 round_trippers.go:463] GET https://172.17.88.175:8443/version
	I0603 04:24:57.575660    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.575660    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.575660    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.576920    1052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:24:57.576920    1052 api_server.go:141] control plane version: v1.30.1
	I0603 04:24:57.576920    1052 api_server.go:131] duration metric: took 11.3668ms to wait for apiserver health ...
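The healthz and version probes above are two plain GETs against the control plane; a sketch using client-go's discovery client (connection details come from the kubeconfig; the path is illustrative):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz: a healthy apiserver answers with the bare body "ok",
        // exactly as logged above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        fmt.Println(string(body), err)
        // GET /version: reports the control-plane version (v1.30.1 in this run).
        info, err := cs.Discovery().ServerVersion()
        fmt.Println(info, err)
    }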
	I0603 04:24:57.576920    1052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:24:57.721481    1052 request.go:629] Waited for 144.3732ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:57.721573    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:57.721573    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.721573    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.721637    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.731365    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:57.739206    1052 system_pods.go:59] 17 kube-system pods found
	I0603 04:24:57.739245    1052 system_pods.go:61] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:24:57.739391    1052 system_pods.go:74] duration metric: took 162.4709ms to wait for pod list to return data ...
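The 17-pod inventory above is a single LIST of the kube-system namespace. An equivalent sketch (clientset built as in the earlier sketches; not minikube's system_pods.go):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Mirrors the `"name" [uid] Phase` lines printed above.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }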
	I0603 04:24:57.739391    1052 default_sa.go:34] waiting for default service account to be created ...
	I0603 04:24:57.924223    1052 request.go:629] Waited for 184.4915ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:24:57.924223    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:24:57.924223    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.924223    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.924223    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.929970    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.930845    1052 default_sa.go:45] found service account: "default"
	I0603 04:24:57.930845    1052 default_sa.go:55] duration metric: took 191.4531ms for default service account to be created ...
	I0603 04:24:57.930845    1052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 04:24:58.125097    1052 request.go:629] Waited for 194.2511ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:58.125300    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:58.125300    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:58.125300    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:58.125371    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:58.135992    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:24:58.145094    1052 system_pods.go:86] 17 kube-system pods found
	I0603 04:24:58.145094    1052 system_pods.go:89] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:24:58.145094    1052 system_pods.go:126] duration metric: took 214.2483ms to wait for k8s-apps to be running ...
	I0603 04:24:58.145094    1052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 04:24:58.154870    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:24:58.191970    1052 system_svc.go:56] duration metric: took 46.8764ms WaitForService to wait for kubelet
	I0603 04:24:58.191970    1052 kubeadm.go:576] duration metric: took 19.4143132s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:24:58.191970    1052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:24:58.316976    1052 request.go:629] Waited for 124.8185ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes
	I0603 04:24:58.317205    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes
	I0603 04:24:58.317205    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:58.317205    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:58.317205    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:58.325544    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:24:58.327574    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:24:58.327734    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:24:58.327734    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:24:58.327734    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:24:58.327734    1052 node_conditions.go:105] duration metric: took 135.7632ms to run NodePressure ...
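The NodePressure check above lists all nodes once and reads capacity off each Node's status; a sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The log reports ephemeral-storage capacity (17734596Ki) and CPU
            // count (2) per node; both live in Node.Status.Capacity.
            fmt.Println(n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }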
	I0603 04:24:58.327734    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:24:58.327734    1052 start.go:254] writing updated cluster config ...
	I0603 04:24:58.331663    1052 out.go:177] 
	I0603 04:24:58.344561    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:24:58.344561    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:24:58.353970    1052 out.go:177] * Starting "ha-528700-m03" control-plane node in "ha-528700" cluster
	I0603 04:24:58.357125    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:24:58.357125    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:24:58.357842    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:24:58.358182    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:24:58.358356    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:24:58.359578    1052 start.go:360] acquireMachinesLock for ha-528700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:24:58.360557    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700-m03"
	I0603 04:24:58.360557    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:24:58.360557    1052 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0603 04:24:58.364188    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:24:58.365294    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:24:58.365355    1052 client.go:168] LocalClient.Create starting
	I0603 04:24:58.365627    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:24:58.366133    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:24:58.366200    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:24:58.366457    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:24:58.366629    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:24:58.366629    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:24:58.366629    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:25:02.053302    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:25:02.053932    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:02.053984    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:25:03.546741    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:25:03.546741    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:03.546829    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:25:07.372584    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:25:07.372584    1052 main.go:141] libmachine: [stderr =====>] : 
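Each "[executing ==>]" / "[stdout =====>]" pair above is one powershell.exe round trip. The driver's pattern reduces to a sketch like this (not the actual libmachine source; the query shown is the switch enumeration from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ps runs one PowerShell snippet the way the log lines show:
    // -NoProfile -NonInteractive, capturing stdout.
    func ps(cmd string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", cmd).Output()
        return string(out), err
    }

    func main() {
        out, err := ps(`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        fmt.Println(out, err)
    }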
	I0603 04:25:07.374683    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:25:07.833384    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:25:08.057341    1052 main.go:141] libmachine: Creating VM...
	I0603 04:25:08.057341    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:25:11.021183    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:25:11.021183    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:11.021447    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:25:11.021529    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:25:12.818695    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:25:12.819032    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:12.819032    1052 main.go:141] libmachine: Creating VHD
	I0603 04:25:12.819155    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:25:16.660654    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4773904F-6D49-4129-8E2E-A2E8D56C24E4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:25:16.660654    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:16.660912    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:25:16.660912    1052 main.go:141] libmachine: Writing SSH key tar header
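"Writing magic tar header" / "Writing SSH key tar header" refer to the docker-machine technique of seeding the start of the raw fixed VHD with a tar stream, so the boot2docker guest recognizes the marker, formats the disk, and installs the SSH key on first boot. A sketch under that assumption; the entry names follow the docker-machine convention and should be treated as assumptions, not minikube's exact code:

    package main

    import (
        "archive/tar"
        "os"
    )

    // seedDisk writes a tar stream at offset 0 of the fixed VHD. The first
    // entry name is the "magic" marker the guest looks for; the second holds
    // the public key.
    func seedDisk(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        entries := []struct {
            name string
            body []byte
        }{
            {"boot2docker, please format-me", nil}, // the magic tar header
            {".ssh/authorized_keys", pubKey},       // the SSH key tar header
        }
        for _, e := range entries {
            hdr := &tar.Header{Name: e.name, Mode: 0644, Size: int64(len(e.body))}
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if _, err := tw.Write(e.body); err != nil {
                return err
            }
        }
        return tw.Close()
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // illustrative path
        if err != nil {
            panic(err)
        }
        if err := seedDisk("fixed.vhd", key); err != nil {
            panic(err)
        }
    }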
	I0603 04:25:16.671592    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd' -SizeBytes 20000MB
	I0603 04:25:22.520085    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:22.520085    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:22.520779    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:25:26.314985    1052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-528700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:25:26.315306    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:26.315306    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700-m03 -DynamicMemoryEnabled $false
	I0603 04:25:28.649817    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:28.650564    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:28.650564    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700-m03 -Count 2
	I0603 04:25:30.868612    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:30.868612    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:30.868976    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\boot2docker.iso'
	I0603 04:25:33.519310    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:33.519396    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:33.519467    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd'
	I0603 04:25:36.219234    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:36.220156    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:36.220303    1052 main.go:141] libmachine: Starting VM...
	I0603 04:25:36.220374    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700-m03
	I0603 04:25:39.351010    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:39.351712    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:39.351774    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:25:39.351836    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:41.721033    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:41.721033    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:41.721791    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:44.383469    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:44.383469    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:45.392893    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:47.698971    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:47.699283    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:47.699283    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:50.302813    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:50.302813    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:51.315684    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:56.304146    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:56.304146    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:57.308097    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:59.590202    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:59.590202    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:59.590547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:02.172200    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:26:02.172200    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:03.186295    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:08.134860    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:08.134860    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:08.134957    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:10.333035    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:10.333035    1052 main.go:141] libmachine: [stderr =====>] : 
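"Waiting for host to start..." above is a poll loop: query the VM state, then ask Hyper-V for the first NIC's first address, and sleep between empty answers until an IP (here 172.17.89.50) appears. A self-contained sketch of that loop (the ps helper repeats the earlier sketch; interval and timeout are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return string(out), err
    }

    // waitForIP polls the VM's first NIC until Hyper-V reports an address,
    // mirroring the loop in the log (which succeeded after roughly 30s).
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if err == nil && strings.TrimSpace(ip) != "" {
                return strings.TrimSpace(ip), nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-528700-m03", 5*time.Minute)
        fmt.Println(ip, err)
    }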
	I0603 04:26:10.333035    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:26:10.333761    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:12.559616    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:12.559671    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:12.559671    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:15.191495    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:15.191610    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:15.196285    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:15.208858    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:15.208858    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:26:15.324340    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:26:15.324340    1052 buildroot.go:166] provisioning hostname "ha-528700-m03"
	I0603 04:26:15.324340    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:17.487362    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:17.487362    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:17.487584    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:20.104567    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:20.105291    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:20.111791    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:20.111945    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:20.111945    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700-m03 && echo "ha-528700-m03" | sudo tee /etc/hostname
	I0603 04:26:20.261026    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700-m03
	
	I0603 04:26:20.261142    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:22.433608    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:22.433608    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:22.433916    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:25.067691    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:25.067691    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:25.077562    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:25.077562    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:25.078490    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:26:25.227854    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
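Each provisioning command above (hostname, the tee into /etc/hostname, the /etc/hosts patch) is a single SSH exec against 172.17.89.50:22 with the machine's id_rsa. A sketch with golang.org/x/crypto/ssh; minikube's own runner lives in ssh_runner.go, and host-key checking is skipped here only because that matches docker-machine-style provisioning:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path mirrors the sshutil.go line further below; illustrative.
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "172.17.89.50:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname ha-528700-m03 && echo "ha-528700-m03" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }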
	I0603 04:26:25.227930    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:26:25.228006    1052 buildroot.go:174] setting up certificates
	I0603 04:26:25.228006    1052 provision.go:84] configureAuth start
	I0603 04:26:25.228082    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:27.408753    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:27.409023    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:27.409123    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:30.043379    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:30.043598    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:30.043598    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:32.202329    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:32.202394    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:32.202527    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:34.821170    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:34.821170    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:34.821170    1052 provision.go:143] copyHostCerts
	I0603 04:26:34.821170    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:26:34.821170    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:26:34.821695    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:26:34.821772    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:26:34.823149    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:26:34.823149    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:26:34.823149    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:26:34.823692    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:26:34.825107    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:26:34.825107    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:26:34.825651    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:26:34.825950    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:26:34.826941    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700-m03 san=[127.0.0.1 172.17.89.50 ha-528700-m03 localhost minikube]
	I0603 04:26:34.983621    1052 provision.go:177] copyRemoteCerts
	I0603 04:26:34.994021    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:26:34.994021    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:39.767551    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:39.767551    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:39.767551    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:26:39.880778    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8867458s)
	I0603 04:26:39.880869    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:26:39.881092    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:26:39.929681    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:26:39.929681    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 04:26:39.985873    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:26:39.985873    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:26:40.032547    1052 provision.go:87] duration metric: took 14.804507s to configureAuth
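The configureAuth step just completed ("generating server cert ... san=[127.0.0.1 172.17.89.50 ha-528700-m03 localhost minikube]") is ordinary x509 issuance against the local CA. A standard-library sketch; the file paths are illustrative, error handling is trimmed, and it assumes an RSA/PKCS#1 CA key, which may not match the actual cert material:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("ca.pem")         // illustrative paths
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        ca, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes RSA/PKCS#1
        if err != nil {
            panic(err)
        }
        priv, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-528700-m03"}},
            NotBefore:    time.Now(),
            // CertExpiration:26280h0m0s in the profile config above.
            NotAfter:    time.Now().Add(26280 * time.Hour),
            DNSNames:    []string{"ha-528700-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.89.50")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &priv.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }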
	I0603 04:26:40.032547    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:26:40.032547    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:26:40.032547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:42.225109    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:42.225456    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:42.225456    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:44.807336    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:44.807336    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:44.812506    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:44.813222    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:44.813222    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:26:44.937734    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:26:44.937886    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:26:44.938148    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:26:44.938245    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:47.085858    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:47.086116    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:47.086116    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:49.665857    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:49.666489    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:49.671866    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:49.672587    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:49.672587    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.88.175"
	Environment="NO_PROXY=172.17.88.175,172.17.84.187"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:26:49.827464    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.88.175
	Environment=NO_PROXY=172.17.88.175,172.17.84.187
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:26:49.828005    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:54.617676    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:54.617676    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:54.623471    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:54.623830    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:54.623830    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:26:56.842660    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 04:26:56.842771    1052 machine.go:97] duration metric: took 46.5095178s to provisionDockerMachine
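	--- note: the unit swap above is deliberately idempotent: the rendered docker.service.new is diffed against the installed unit and only moved into place (with daemon-reload, enable, restart) when the two differ. On this fresh node the diff fails because no unit exists yet, hence the "can't stat" output and the newly created symlink. A sketch of the same compare-then-swap pattern in Go; the path and unit name are illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged writes newContent to path only when it differs from the
// file currently on disk, then reloads systemd and restarts the unit. It
// mirrors the shell one-liner in the log: diff ... || { mv ...; systemctl ... }.
func installIfChanged(path string, newContent []byte, unit string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // already up to date: no restart, no service blip
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	if err := installIfChanged("/lib/systemd/system/example.service", unit, "example"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}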
	I0603 04:26:56.842771    1052 client.go:171] duration metric: took 1m58.4771449s to LocalClient.Create
	I0603 04:26:56.842771    1052 start.go:167] duration metric: took 1m58.477206s to libmachine.API.Create "ha-528700"
	I0603 04:26:56.842956    1052 start.go:293] postStartSetup for "ha-528700-m03" (driver="hyperv")
	I0603 04:26:56.843019    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:26:56.855344    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:26:56.855344    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:59.008891    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:59.009395    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:59.009395    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:01.636014    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:01.636014    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:01.636626    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:01.749316    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8938673s)
	I0603 04:27:01.761289    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:27:01.767451    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:27:01.767451    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:27:01.768427    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:27:01.769425    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:27:01.769425    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:27:01.780727    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:27:01.801161    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:27:01.851491    1052 start.go:296] duration metric: took 5.008523s for postStartSetup
	I0603 04:27:01.854438    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:04.001967    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:04.001967    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:04.002547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:06.641694    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:06.641776    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:06.642042    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:27:06.644607    1052 start.go:128] duration metric: took 2m8.2837565s to createHost
	I0603 04:27:06.644863    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:08.804466    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:08.804466    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:08.805263    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:11.409745    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:11.410423    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:11.415748    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:27:11.415748    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:27:11.415748    1052 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 04:27:11.535658    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717414031.542400255
	
	I0603 04:27:11.535725    1052 fix.go:216] guest clock: 1717414031.542400255
	I0603 04:27:11.535782    1052 fix.go:229] Guest: 2024-06-03 04:27:11.542400255 -0700 PDT Remote: 2024-06-03 04:27:06.6446079 -0700 PDT m=+572.449375301 (delta=4.897792355s)
	I0603 04:27:11.535851    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:13.743131    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:13.743131    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:13.743439    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:16.370649    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:16.370649    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:16.378401    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:27:16.379040    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:27:16.379040    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717414031
	I0603 04:27:16.518862    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:27:11 UTC 2024
	
	I0603 04:27:16.518862    1052 fix.go:236] clock set: Mon Jun  3 11:27:11 UTC 2024
	 (err=<nil>)
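	--- note: the clock fix above compares the guest's `date +%s.%N` output against the host clock and, because the delta (about 4.9s here) exceeds the drift tolerance, pushes the host's Unix time into the VM with `date -s @<seconds>`. A sketch of that check; the 2s threshold and the faked skew are illustrative, and the real flow runs the date commands over SSH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Guest time would come from `date +%s.%N` run inside the VM; here we
	// fake a ~4.9s skew locally for illustration.
	guest := time.Now().Add(4900 * time.Millisecond)
	host := time.Now()

	if delta := guest.Sub(host); delta > 2*time.Second || delta < -2*time.Second {
		// Push the host's Unix time into the guest, as the log does with
		// `sudo date -s @1717414031` (over SSH in the real flow).
		cmd := exec.Command("date", "-s", fmt.Sprintf("@%d", host.Unix()))
		fmt.Println("would run:", cmd.Args)
	}
}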
	I0603 04:27:16.518970    1052 start.go:83] releasing machines lock for "ha-528700-m03", held for 2m18.1579884s
	I0603 04:27:16.519119    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:18.677116    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:18.677116    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:18.677352    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:21.287359    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:21.287652    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:21.290006    1052 out.go:177] * Found network options:
	I0603 04:27:21.293529    1052 out.go:177]   - NO_PROXY=172.17.88.175,172.17.84.187
	W0603 04:27:21.296389    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.296389    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:27:21.298475    1052 out.go:177]   - NO_PROXY=172.17.88.175,172.17.84.187
	W0603 04:27:21.300784    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.300784    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.302498    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.302498    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:27:21.305291    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:27:21.305448    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:21.315861    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 04:27:21.316402    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:23.566075    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:23.566075    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:23.567123    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:23.568009    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:23.568009    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:23.568541    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:26.318556    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:26.318643    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:26.318801    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:26.342545    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:26.342545    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:26.342545    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:26.502930    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1975011s)
	I0603 04:27:26.502930    1052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1863895s)
	W0603 04:27:26.502930    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:27:26.515893    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:27:26.547630    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 04:27:26.547630    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:27:26.547708    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:27:26.599028    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:27:26.630327    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:27:26.651502    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:27:26.668149    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:27:26.700216    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:27:26.731813    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:27:26.761109    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:27:26.793612    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:27:26.826773    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:27:26.858380    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:27:26.889858    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:27:26.923041    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:27:26.953033    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:27:26.992400    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:27.189127    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
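	--- note: the sed run above rewrites /etc/containerd/config.toml in place: sandbox image, cgroup driver (SystemdCgroup = false, i.e. cgroupfs), the runc v2 runtime, and the CNI conf_dir. A sketch of one of those edits, the SystemdCgroup toggle, as a Go regexp rewrite equivalent to the sed expression; the file path matches the log, the rest is illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// (?m) makes ^ and $ match per line, like sed's line-oriented addressing.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}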
	I0603 04:27:27.220236    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:27:27.232156    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:27:27.271943    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:27:27.306217    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:27:27.357485    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:27:27.391398    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:27:27.441707    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:27:27.504317    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:27:27.530077    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:27:27.581211    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:27:27.597685    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:27:27.616198    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:27:27.657648    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:27:27.860622    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:27:28.051541    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:27:28.051641    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:27:28.100658    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:28.309198    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:27:30.837960    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5287558s)
	I0603 04:27:30.851317    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:27:30.888038    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:27:30.924182    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:27:31.126388    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:27:31.336754    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:31.546453    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:27:31.589730    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:27:31.626258    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:31.835733    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:27:31.954473    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:27:31.968963    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:27:31.977303    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:27:31.989293    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:27:32.006594    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:27:32.060331    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:27:32.068818    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:27:32.109869    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:27:32.141370    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:27:32.145230    1052 out.go:177]   - env NO_PROXY=172.17.88.175
	I0603 04:27:32.149136    1052 out.go:177]   - env NO_PROXY=172.17.88.175,172.17.84.187
	I0603 04:27:32.151758    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:27:32.159075    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:27:32.159075    1052 ip.go:210] interface addr: 172.17.80.1/20
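	--- note: the ip.go lines above locate the host side of the Hyper-V "Default Switch" by matching interface names against a prefix, then read that interface's addresses (the 172.17.80.1/20 gateway used for host.minikube.internal below). A sketch of the same lookup with the Go standard library; the prefix mirrors the log, and this is not the ip.go implementation:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			fmt.Printf("%q does not match prefix\n", ifc.Name)
			continue
		}
		// Matching interface found: list its addresses, as the log does.
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			fmt.Println("interface addr:", a)
		}
	}
}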
	I0603 04:27:32.171914    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:27:32.178503    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
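	--- note: the bash one-liner above keeps the /etc/hosts update idempotent: strip any existing line for host.minikube.internal, append the current mapping, and copy the temp file back over /etc/hosts. A sketch of the same rewrite as a pure function; the real flow runs the grep/echo pipeline over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<name>" and appends
// the new mapping, matching the grep -v / echo pipeline in the log.
func setHostsEntry(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	return strings.Join(keep, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	fmt.Print(setHostsEntry(string(data), "172.17.80.1", "host.minikube.internal"))
}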
	I0603 04:27:32.199798    1052 mustload.go:65] Loading cluster: ha-528700
	I0603 04:27:32.200378    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:27:32.201022    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:34.343900    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:34.343900    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:34.343900    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:27:34.344623    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.89.50
	I0603 04:27:34.344623    1052 certs.go:194] generating shared ca certs ...
	I0603 04:27:34.344623    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.345193    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:27:34.345422    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:27:34.345422    1052 certs.go:256] generating profile certs ...
	I0603 04:27:34.346456    1052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:27:34.346635    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a
	I0603 04:27:34.346796    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.84.187 172.17.89.50 172.17.95.254]
	I0603 04:27:34.527642    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a ...
	I0603 04:27:34.527642    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a: {Name:mk98650ae6e1a65b569fcd292aea4237111735de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.529712    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a ...
	I0603 04:27:34.529712    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a: {Name:mk677f98976c65fd93c890594ab73256d0d268dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.530952    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:27:34.544971    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
	I0603 04:27:34.545949    1052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:27:34.545949    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:27:34.545949    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:27:34.547869    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:27:34.548014    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:27:34.548141    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:27:34.548801    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:27:34.548801    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:27:34.548801    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:27:34.549802    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:27:34.549802    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:27:34.550858    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:34.551131    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:36.719842    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:36.719969    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:36.720060    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:39.345721    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:27:39.345721    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:39.345721    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:27:39.454907    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 04:27:39.462390    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 04:27:39.498283    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 04:27:39.507554    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 04:27:39.538881    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 04:27:39.547013    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 04:27:39.580183    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 04:27:39.588152    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 04:27:39.622311    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 04:27:39.636346    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 04:27:39.670280    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 04:27:39.681040    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 04:27:39.702197    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:27:39.752550    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:27:39.798009    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:27:39.850446    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:27:39.901617    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 04:27:39.955460    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 04:27:40.007890    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:27:40.054382    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:27:40.105998    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:27:40.149714    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:27:40.195183    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:27:40.244674    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 04:27:40.276496    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 04:27:40.309274    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 04:27:40.343128    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 04:27:40.373125    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 04:27:40.405644    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 04:27:40.436770    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 04:27:40.481709    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:27:40.502921    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:27:40.535046    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.542086    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.553425    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.574437    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 04:27:40.608950    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:27:40.638375    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.646857    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.661991    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.683841    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 04:27:40.719104    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:27:40.753551    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.760993    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.774040    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.794815    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:27:40.826385    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:27:40.836543    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:27:40.836543    1052 kubeadm.go:928] updating node {m03 172.17.89.50 8443 v1.30.1 docker true true} ...
	I0603 04:27:40.836543    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.89.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 04:27:40.837131    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:27:40.850165    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:27:40.877319    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:27:40.877319    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
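	--- note: the manifest above is what the kube-vip config generation renders: a static pod that claims the 172.17.95.254 VIP via ARP and leader election (plndr-cp-lock) and, because lb_enable was auto-set, load-balances the control plane on port 8443. A minimal sketch of rendering such a manifest with text/template; the field names and the trimmed-down manifest are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values as seen in the log: the HA VIP, the guest NIC, the API port.
	t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{"172.17.95.254", "eth0", 8443})
}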
	I0603 04:27:40.890218    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:27:40.912623    1052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 04:27:40.924520    1052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
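	--- note: each binary URL above carries a `?checksum=file:<url>.sha256` hint, meaning the download is verified against the published SHA-256 before being scp'd into /var/lib/minikube/binaries. A sketch of that verification with the standard library; minikube delegates the checksum query parameter to its download layer, so this stand-in is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads url and checks it against the hex SHA-256
// published at url+".sha256", as the checksum=file:... hint implies.
func fetchVerified(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	sum, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	want := strings.Fields(string(sum))[0] // file may be "<hash>" or "<hash>  <name>"
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return nil, fmt.Errorf("checksum mismatch: got %x want %s", got, want)
	}
	return body, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	b, err := fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl")
	fmt.Println(len(b), err)
}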
	I0603 04:27:40.950062    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:27:40.950129    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:27:40.964147    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:27:40.965842    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:27:40.971614    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:27:40.990959    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:27:40.991100    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 04:27:40.991174    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 04:27:40.991174    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 04:27:40.991174    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 04:27:41.004122    1052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:27:41.058883    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 04:27:41.060291    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 04:27:42.249625    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 04:27:42.270026    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0603 04:27:42.312518    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:27:42.346923    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0603 04:27:42.403045    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:27:42.409800    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:27:42.443745    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:42.651031    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:27:42.681662    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:27:42.682626    1052 start.go:316] joinCluster: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:27:42.682897    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 04:27:42.682952    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:44.872991    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:44.873957    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:44.874080    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:47.502082    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:27:47.502082    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:47.502972    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:27:47.715750    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0327335s)
	I0603 04:27:47.715830    1052 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:27:47.715886    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nx0soc.q0j32x6kkd97gdds --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m03 --control-plane --apiserver-advertise-address=172.17.89.50 --apiserver-bind-port=8443"
	I0603 04:28:31.930574    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nx0soc.q0j32x6kkd97gdds --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m03 --control-plane --apiserver-advertise-address=172.17.89.50 --apiserver-bind-port=8443": (44.2145106s)
	I0603 04:28:31.931263    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 04:28:32.718364    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700-m03 minikube.k8s.io/updated_at=2024_06_03T04_28_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=false
	I0603 04:28:32.897094    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-528700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 04:28:33.117772    1052 start.go:318] duration metric: took 50.4350499s to joinCluster
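	--- note: the join above is two steps: mint a join command on an existing control plane (`kubeadm token create --print-join-command --ttl=0`), then run it on the new node with the control-plane extras seen in the log (advertise address, bind port, CRI socket, node name). A sketch of assembling that command; the flags are copied from the log, and the token and CA hash come from the first step:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On an existing control-plane node: mint a join command with an
	// unexpiring token, as the log does before joining m03.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	// The printed command is then run on the new node with the extra
	// control-plane flags seen in the log.
	join := strings.TrimSpace(string(out)) +
		" --control-plane --apiserver-advertise-address=172.17.89.50" +
		" --apiserver-bind-port=8443" +
		" --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=ha-528700-m03"
	fmt.Println(join)
}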
	I0603 04:28:33.117772    1052 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:28:33.118659    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:28:33.121654    1052 out.go:177] * Verifying Kubernetes components...
	I0603 04:28:33.138661    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:28:33.657250    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:28:33.690383    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:28:33.691112    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 04:28:33.691112    1052 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.88.175:8443
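The kubeadm.go:477 warning above is the key detail here: the kubeconfig on disk still names the HA virtual IP (172.17.95.254), so the test rewrites its client to talk to the current primary at 172.17.88.175 before polling. With client-go that override is one field on rest.Config; a sketch, with the kubeconfig path and target host passed in as assumptions:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func clientForHost(kubeconfig, host string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // Replace the possibly stale server address, as kubeadm.go:477 logs.
        cfg.Host = host // e.g. "https://172.17.88.175:8443"
        return kubernetes.NewForConfig(cfg)
    }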
	I0603 04:28:33.692221    1052 node_ready.go:35] waiting up to 6m0s for node "ha-528700-m03" to be "Ready" ...
	I0603 04:28:33.692355    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:33.692355    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:33.692355    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:33.692468    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:33.707016    1052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 04:28:34.193603    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:34.193603    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:34.193603    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:34.193603    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:34.199451    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:34.701281    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:34.701329    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:34.701329    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:34.701360    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:34.705636    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:35.193849    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:35.193964    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:35.194020    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:35.194020    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:35.199336    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:35.698766    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:35.698766    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:35.698766    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:35.698766    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:35.702954    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:35.705141    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:36.206400    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:36.206400    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:36.206477    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:36.206477    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:36.211226    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:36.696301    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:36.696375    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:36.696375    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:36.696375    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:36.701826    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:37.205200    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:37.205288    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:37.205288    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:37.205288    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:37.209869    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:37.697807    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:37.698015    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:37.698015    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:37.698015    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:37.702854    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:38.192799    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:38.192799    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:38.192799    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:38.192799    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:38.355035    1052 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0603 04:28:38.356923    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:38.707697    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:38.707774    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:38.707774    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:38.707774    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:38.712492    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:39.207412    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:39.207412    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:39.207412    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:39.207412    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:39.212473    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:39.693230    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:39.693280    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:39.693280    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:39.693280    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:39.702332    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:28:40.196586    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:40.196844    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:40.196844    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:40.196844    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:40.203172    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:40.697913    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:40.697913    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:40.697913    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:40.697913    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:40.702347    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:40.703048    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:41.200350    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:41.200350    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:41.200643    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:41.200643    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:41.206103    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:41.699558    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:41.699810    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:41.699810    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:41.699810    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:41.704509    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.204235    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.204422    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.204422    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.204422    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.209071    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.210288    1052 node_ready.go:49] node "ha-528700-m03" has status "Ready":"True"
	I0603 04:28:42.210349    1052 node_ready.go:38] duration metric: took 8.5180503s for node "ha-528700-m03" to be "Ready" ...
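The node_ready wait that just finished is a plain poll: GET the node roughly every 500ms and stop once its NodeReady condition reports True (which happens above at 04:28:42, about 8.5s in). A compact sketch of the same loop with client-go, reusing a clientset like the one built in the earlier fragment:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is True.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }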
	I0603 04:28:42.210349    1052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:28:42.210492    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:42.210492    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.210492    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.210492    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.228074    1052 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 04:28:42.238194    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.238194    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6tv8
	I0603 04:28:42.238194    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.238194    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.238194    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.243192    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.244493    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.244493    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.244493    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.244493    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.248801    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.249724    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.249777    1052 pod_ready.go:81] duration metric: took 11.5834ms for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.249777    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.249889    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qwkq9
	I0603 04:28:42.249889    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.249889    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.249889    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.255356    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:42.257252    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.257252    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.257378    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.257378    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.260634    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.262033    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.262033    1052 pod_ready.go:81] duration metric: took 12.2555ms for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.262033    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.262033    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700
	I0603 04:28:42.262033    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.262033    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.262033    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.265647    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.267024    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.267093    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.267093    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.267093    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.270351    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.271881    1052 pod_ready.go:92] pod "etcd-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.271881    1052 pod_ready.go:81] duration metric: took 9.8481ms for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.271881    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.271960    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:28:42.272061    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.272061    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.272061    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.275312    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.276340    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:42.276398    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.276398    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.276398    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.280661    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.282242    1052 pod_ready.go:92] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.282297    1052 pod_ready.go:81] duration metric: took 10.2823ms for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.282297    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.407452    1052 request.go:629] Waited for 125.0198ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.407634    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.407634    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.407634    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.407634    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.414792    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:28:42.610442    1052 request.go:629] Waited for 194.4642ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.610792    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.610792    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.610792    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.610792    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.617377    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:42.813171    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.813171    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.813379    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.813379    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.818428    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:43.015598    1052 request.go:629] Waited for 195.3527ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.015598    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.015598    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.015598    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.015598    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.021328    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:43.296053    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:43.296325    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.296325    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.296325    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.299643    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:43.406394    1052 request.go:629] Waited for 103.4776ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.406726    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.406726    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.406726    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.406890    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.411102    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:43.782655    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:43.783342    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.783342    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.783342    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.796559    1052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 04:28:43.813534    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.813788    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.813788    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.813788    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.817968    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.287600    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:44.292027    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.292027    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.292027    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.297564    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:44.298199    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:44.298199    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.298744    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.298744    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.303047    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.304504    1052 pod_ready.go:102] pod "etcd-ha-528700-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 04:28:44.788824    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:44.788824    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.788824    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.788824    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.793650    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.795489    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:44.795489    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.795547    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.795547    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.800326    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.289895    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:45.289951    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.289951    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.289951    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.303749    1052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 04:28:45.305105    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:45.305245    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.305245    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.305245    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.311209    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:45.790660    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:45.790660    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.790660    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.790660    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.795270    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.797380    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:45.797380    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.797466    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.797466    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.801993    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.802279    1052 pod_ready.go:92] pod "etcd-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:45.802279    1052 pod_ready.go:81] duration metric: took 3.5199738s for pod "etcd-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
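The request.go:629 "Waited ... due to client-side throttling" lines sprinkled through this stretch are not apiserver priority-and-fairness delays; as the message itself says, they come from client-go's own rate limiter, which falls back to QPS 5 with a burst of 10 when rest.Config leaves QPS and Burst at zero (exactly what the kapi.go dump above shows). Raising the limits is a two-field change; a sketch:

    package sketch

    import "k8s.io/client-go/rest"

    // relaxThrottling lifts the client-side limits that produce the
    // request.go:629 waits above; client-go defaults are QPS 5, Burst 10.
    func relaxThrottling(cfg *rest.Config) {
        cfg.QPS = 50
        cfg.Burst = 100
    }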
	I0603 04:28:45.802279    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:45.802821    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700
	I0603 04:28:45.802821    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.802821    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.802821    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.807122    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.808530    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:45.808530    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.808530    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.808530    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.814973    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:45.816334    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:45.816367    1052 pod_ready.go:81] duration metric: took 14.0877ms for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:45.816367    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.008249    1052 request.go:629] Waited for 191.8209ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:28:46.008586    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:28:46.008586    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.008680    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.008680    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.015023    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:46.212992    1052 request.go:629] Waited for 196.6882ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:46.213125    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:46.213263    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.213263    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.213263    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.218774    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:46.220025    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:46.220025    1052 pod_ready.go:81] duration metric: took 403.657ms for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.220025    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.415440    1052 request.go:629] Waited for 195.1327ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.415524    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.415524    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.415524    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.415601    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.421948    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:46.618959    1052 request.go:629] Waited for 195.8944ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:46.618959    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:46.618959    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.618959    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.618959    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.624705    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:46.806263    1052 request.go:629] Waited for 77.7386ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.806563    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.806563    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.806563    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.806563    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.813002    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:47.009971    1052 request.go:629] Waited for 195.7123ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:47.010314    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:47.010314    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.010314    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.010314    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.014906    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:47.015724    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.015724    1052 pod_ready.go:81] duration metric: took 795.6979ms for pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.015724    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.212350    1052 request.go:629] Waited for 196.461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:28:47.212427    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:28:47.212560    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.212560    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.212560    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.217705    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:47.414407    1052 request.go:629] Waited for 195.2934ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:47.414407    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:47.414407    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.414407    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.414407    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.421885    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:28:47.423518    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.423518    1052 pod_ready.go:81] duration metric: took 407.7931ms for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.423586    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.617277    1052 request.go:629] Waited for 193.6227ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:28:47.617587    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:28:47.617587    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.617587    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.617587    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.622760    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:47.805527    1052 request.go:629] Waited for 180.9864ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:47.805746    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:47.805807    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.805807    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.805807    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.810432    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:47.811844    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.811896    1052 pod_ready.go:81] duration metric: took 388.3091ms for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.811896    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.009334    1052 request.go:629] Waited for 197.3039ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m03
	I0603 04:28:48.009522    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m03
	I0603 04:28:48.009522    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.009522    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.009640    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.014853    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:48.213114    1052 request.go:629] Waited for 196.4653ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:48.213313    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:48.213313    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.213313    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.213313    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.217681    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:48.218748    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:48.218748    1052 pod_ready.go:81] duration metric: took 406.8513ms for pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.218824    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.417855    1052 request.go:629] Waited for 198.5257ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:28:48.418377    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:28:48.418377    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.418377    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.418377    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.423207    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:48.604863    1052 request.go:629] Waited for 180.5953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:48.604932    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:48.605035    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.605035    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.605035    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.611880    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:48.612834    1052 pod_ready.go:92] pod "kube-proxy-dbr56" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:48.612834    1052 pod_ready.go:81] duration metric: took 393.9487ms for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.612834    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fggr6" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.809657    1052 request.go:629] Waited for 196.4992ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fggr6
	I0603 04:28:48.809891    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fggr6
	I0603 04:28:48.809891    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.809994    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.810038    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.815897    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.013529    1052 request.go:629] Waited for 196.4179ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:49.013529    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:49.013759    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.013759    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.013759    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.017946    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.019167    1052 pod_ready.go:92] pod "kube-proxy-fggr6" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.019167    1052 pod_ready.go:81] duration metric: took 406.332ms for pod "kube-proxy-fggr6" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.019167    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.216456    1052 request.go:629] Waited for 196.5445ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:28:49.216846    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:28:49.216939    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.216939    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.216939    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.221512    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.419829    1052 request.go:629] Waited for 197.8562ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:49.419829    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:49.419829    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.419829    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.419829    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.425571    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.426919    1052 pod_ready.go:92] pod "kube-proxy-wlzrp" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.426919    1052 pod_ready.go:81] duration metric: took 407.7509ms for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.426990    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.608397    1052 request.go:629] Waited for 180.9961ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:28:49.608577    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:28:49.608652    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.608652    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.608652    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.613842    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.810719    1052 request.go:629] Waited for 195.9362ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:49.811287    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:49.811287    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.811287    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.811287    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.815700    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.817626    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.817626    1052 pod_ready.go:81] duration metric: took 390.6353ms for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.817626    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.015259    1052 request.go:629] Waited for 197.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:28:50.015776    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:28:50.015776    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.015776    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.015847    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.024914    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:28:50.219138    1052 request.go:629] Waited for 193.1934ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:50.219443    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:50.219443    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.219443    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.219443    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.225129    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.227080    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:50.227080    1052 pod_ready.go:81] duration metric: took 409.4527ms for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.227080    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.406002    1052 request.go:629] Waited for 178.9216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m03
	I0603 04:28:50.406358    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m03
	I0603 04:28:50.406358    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.406358    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.406358    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.411605    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.608807    1052 request.go:629] Waited for 195.5637ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:50.608948    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:50.608948    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.608948    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.608948    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.614449    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.616451    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:50.616510    1052 pod_ready.go:81] duration metric: took 389.4294ms for pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.616510    1052 pod_ready.go:38] duration metric: took 8.4061423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:28:50.616610    1052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:28:50.629236    1052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:28:50.663509    1052 api_server.go:72] duration metric: took 17.5456968s to wait for apiserver process to appear ...
	I0603 04:28:50.663509    1052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:28:50.663509    1052 api_server.go:253] Checking apiserver healthz at https://172.17.88.175:8443/healthz ...
	I0603 04:28:50.671322    1052 api_server.go:279] https://172.17.88.175:8443/healthz returned 200:
	ok
	I0603 04:28:50.671464    1052 round_trippers.go:463] GET https://172.17.88.175:8443/version
	I0603 04:28:50.671489    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.671489    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.671489    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.672657    1052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:28:50.673289    1052 api_server.go:141] control plane version: v1.30.1
	I0603 04:28:50.673289    1052 api_server.go:131] duration metric: took 9.7801ms to wait for apiserver health ...
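The healthz probe is the one check here that bypasses the typed client: a raw HTTPS GET on /healthz whose entire expected body is the string "ok". Reproducing it outside minikube takes only crypto/tls plus the client certificate, key, and CA paths listed in the rest.Config dump above; a sketch with those paths passed in as assumptions:

    package sketch

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // checkHealthz GETs <base>/healthz with mutual TLS and expects body "ok".
    func checkHealthz(base, certFile, keyFile, caFile string) error {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return err
        }
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }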
	I0603 04:28:50.673289    1052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:28:50.812535    1052 request.go:629] Waited for 138.9034ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:50.812725    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:50.812725    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.812725    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.812725    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.823419    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:28:50.833503    1052 system_pods.go:59] 24 kube-system pods found
	I0603 04:28:50.833503    1052 system_pods.go:61] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700-m03" [9971b938-e085-42f9-83b7-f868d3ac29e3] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-m9x6v" [77ce9a12-df3d-4bcc-9a1f-dc34158d2c75] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700-m03" [0498e9ff-f11f-4c0b-bd0a-d2a21b9c37b5] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m03" [c8a8819a-e8cf-4123-b353-55364fa738c5] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-fggr6" [13f51aa0-f497-4fed-af63-8358e0a6ee9c] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700-m03" [59a02823-6fef-44f0-90a1-ff4f87eb9a3b] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700-m03" [b7b8c197-df95-441d-a014-21827c9c2fb0] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:28:50.833503    1052 system_pods.go:74] duration metric: took 160.2135ms to wait for pod list to return data ...
	I0603 04:28:50.833503    1052 default_sa.go:34] waiting for default service account to be created ...
	I0603 04:28:51.016152    1052 request.go:629] Waited for 182.4034ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:28:51.016240    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:28:51.016240    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.016240    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.016240    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.020683    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:51.021442    1052 default_sa.go:45] found service account: "default"
	I0603 04:28:51.021442    1052 default_sa.go:55] duration metric: took 187.9389ms for default service account to be created ...
	I0603 04:28:51.021442    1052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 04:28:51.218196    1052 request.go:629] Waited for 196.7533ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:51.218376    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:51.218376    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.218503    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.218604    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.228880    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:28:51.238955    1052 system_pods.go:86] 24 kube-system pods found
	I0603 04:28:51.238955    1052 system_pods.go:89] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700-m03" [9971b938-e085-42f9-83b7-f868d3ac29e3] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-m9x6v" [77ce9a12-df3d-4bcc-9a1f-dc34158d2c75] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700-m03" [0498e9ff-f11f-4c0b-bd0a-d2a21b9c37b5] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m03" [c8a8819a-e8cf-4123-b353-55364fa738c5] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-fggr6" [13f51aa0-f497-4fed-af63-8358e0a6ee9c] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700-m03" [59a02823-6fef-44f0-90a1-ff4f87eb9a3b] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700-m03" [b7b8c197-df95-441d-a014-21827c9c2fb0] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:28:51.239670    1052 system_pods.go:126] duration metric: took 218.2277ms to wait for k8s-apps to be running ...
	I0603 04:28:51.239699    1052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 04:28:51.251803    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:28:51.279143    1052 system_svc.go:56] duration metric: took 39.4199ms WaitForService to wait for kubelet
	I0603 04:28:51.279143    1052 kubeadm.go:576] duration metric: took 18.1613298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:28:51.279213    1052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:28:51.405106    1052 request.go:629] Waited for 125.3144ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes
	I0603 04:28:51.405273    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes
	I0603 04:28:51.405273    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.405357    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.405357    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.413502    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:28:51.416648    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416780    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416780    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416873    1052 node_conditions.go:105] duration metric: took 137.6604ms to run NodePressure ...
	I0603 04:28:51.416948    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:28:51.417004    1052 start.go:254] writing updated cluster config ...
	I0603 04:28:51.429825    1052 ssh_runner.go:195] Run: rm -f paused
	I0603 04:28:51.568920    1052 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 04:28:51.573608    1052 out.go:177] * Done! kubectl is now configured to use "ha-528700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 11:21:04 ha-528700 dockerd[1334]: time="2024-06-03T11:21:04.634933417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:04 ha-528700 dockerd[1334]: time="2024-06-03T11:21:04.685345019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:21:04 ha-528700 dockerd[1334]: time="2024-06-03T11:21:04.685578221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:21:04 ha-528700 dockerd[1334]: time="2024-06-03T11:21:04.685596421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:04 ha-528700 dockerd[1334]: time="2024-06-03T11:21:04.685711722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:04 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:21:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8fa51718f47e24f2b9540a130e224c85128ed96c53e9fe536b65179d7b7df5c7/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 11:21:05 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:21:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/638df5069b9c2d1d1ab8281df3e46b5d3b517b1008dccae2ba514944c8c9376a/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119041085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119109985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119123785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119220486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.369799672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.370273475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.370735378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.371346281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.281822842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282011741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282033541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282757740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:29:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef4916f63c2572e500af0e435ad66fa844055789a54996445d44f5e45da81067/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 11:29:32 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:29:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115184393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115278593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115293993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115397794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8aac137d2078d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   ef4916f63c257       busybox-fc5497c4f-np7rl
	e337c58c541be       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   638df5069b9c2       coredns-7db6d8ff4d-qwkq9
	2a6bf989eb78f       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   8fa51718f47e2       storage-provisioner
	3f2ce3288a437       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   fceba7a162c21       coredns-7db6d8ff4d-f6tv8
	545c59933594b       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   ab6dcc7849e12       kindnet-b247z
	eeac3b42fbc22       747097150317f                                                                                         9 minutes ago        Running             kube-proxy                0                   e5ccb93689142       kube-proxy-dbr56
	3fbe4523644ae       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   1700399e7e214       kube-vip-ha-528700
	ed3e2e6ea4df3       25a1387cdab82                                                                                         10 minutes ago       Running             kube-controller-manager   0                   4673e27399785       kube-controller-manager-ha-528700
	7dce0e761e834       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   dbc6ba1c0ac40       etcd-ha-528700
	7528ad5d62047       a52dc94f0a912                                                                                         10 minutes ago       Running             kube-scheduler            0                   326cf3a1b3414       kube-scheduler-ha-528700
	10075ba4eda88       91be940803172                                                                                         10 minutes ago       Running             kube-apiserver            0                   4b60d234d135c       kube-apiserver-ha-528700
	
	
	==> coredns [3f2ce3288a43] <==
	[INFO] 10.244.2.2:33305 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011562319s
	[INFO] 10.244.2.2:60267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001525s
	[INFO] 10.244.2.2:60436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182401s
	[INFO] 10.244.1.2:45066 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000933s
	[INFO] 10.244.1.2:49898 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001287s
	[INFO] 10.244.1.2:39543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000440901s
	[INFO] 10.244.1.2:37707 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000155701s
	[INFO] 10.244.0.4:57657 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000319001s
	[INFO] 10.244.0.4:54536 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001162s
	[INFO] 10.244.0.4:54212 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.005736709s
	[INFO] 10.244.2.2:54815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202s
	[INFO] 10.244.2.2:53251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001682s
	[INFO] 10.244.2.2:45061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186301s
	[INFO] 10.244.1.2:44264 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001937s
	[INFO] 10.244.1.2:33181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001814s
	[INFO] 10.244.1.2:37345 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001367s
	[INFO] 10.244.0.4:55312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208201s
	[INFO] 10.244.0.4:43313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001737s
	[INFO] 10.244.0.4:57390 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001376s
	[INFO] 10.244.0.4:60067 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002004s
	[INFO] 10.244.2.2:38692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220101s
	[INFO] 10.244.2.2:44288 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000243501s
	[INFO] 10.244.1.2:36361 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001997s
	[INFO] 10.244.1.2:34253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000877s
	[INFO] 10.244.0.4:48401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000253101s
	
	
	==> coredns [e337c58c541b] <==
	[INFO] 127.0.0.1:40581 - 14972 "HINFO IN 3959985873406318438.8433902953276015444. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034571418s
	[INFO] 10.244.2.2:59139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.005645009s
	[INFO] 10.244.0.4:47366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000324901s
	[INFO] 10.244.0.4:43790 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001493303s
	[INFO] 10.244.0.4:57180 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.072006613s
	[INFO] 10.244.2.2:53854 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002349s
	[INFO] 10.244.2.2:49891 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001432s
	[INFO] 10.244.1.2:38448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002266s
	[INFO] 10.244.1.2:43391 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284501s
	[INFO] 10.244.1.2:50524 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061s
	[INFO] 10.244.1.2:48059 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001926s
	[INFO] 10.244.0.4:41207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002291s
	[INFO] 10.244.0.4:52826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013039621s
	[INFO] 10.244.0.4:47414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265501s
	[INFO] 10.244.0.4:53717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214201s
	[INFO] 10.244.0.4:37365 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001134s
	[INFO] 10.244.2.2:60828 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001446s
	[INFO] 10.244.1.2:33790 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001435s
	[INFO] 10.244.2.2:44374 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001982s
	[INFO] 10.244.2.2:60223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001125s
	[INFO] 10.244.1.2:47096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001271s
	[INFO] 10.244.1.2:46573 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000180201s
	[INFO] 10.244.0.4:57331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001275s
	[INFO] 10.244.0.4:56864 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000626s
	[INFO] 10.244.0.4:60853 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000757s
	
	
	==> describe nodes <==
	Name:               ha-528700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T04_20_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:30:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:29:50 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:29:50 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:29:50 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:29:50 +0000   Mon, 03 Jun 2024 11:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.88.175
	  Hostname:    ha-528700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 867fa0169e944c39ab4f9d2356c523db
	  System UUID:                e9e49675-4f1e-4643-9f41-a8c6e6f0faf7
	  Boot ID:                    12b1a7a0-fc13-47d3-9ff0-c7ad1a0dfbf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np7rl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-7db6d8ff4d-f6tv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m41s
	  kube-system                 coredns-7db6d8ff4d-qwkq9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m41s
	  kube-system                 etcd-ha-528700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m56s
	  kube-system                 kindnet-b247z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m41s
	  kube-system                 kube-apiserver-ha-528700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-ha-528700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-proxy-dbr56                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 kube-scheduler-ha-528700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-vip-ha-528700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m40s  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-528700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m55s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m55s  kubelet          Node ha-528700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s  kubelet          Node ha-528700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s  kubelet          Node ha-528700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m42s  node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	  Normal  NodeReady                9m32s  kubelet          Node ha-528700 status is now: NodeReady
	  Normal  RegisteredNode           5m40s  node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	  Normal  RegisteredNode           107s   node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	
	
	Name:               ha-528700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T04_24_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:24:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:30:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:30:10 +0000   Mon, 03 Jun 2024 11:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:30:10 +0000   Mon, 03 Jun 2024 11:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:30:10 +0000   Mon, 03 Jun 2024 11:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:30:10 +0000   Mon, 03 Jun 2024 11:24:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.84.187
	  Hostname:    ha-528700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6896cf2650f4ab1b2fc4fc4d5a4a779
	  System UUID:                9df023a1-46d6-9d47-90f6-a62a2438553a
	  Boot ID:                    8fed1965-c792-451f-9e6f-cbe02ddb8e94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hd7gx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-528700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m58s
	  kube-system                 kindnet-g475v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-528700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-ha-528700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-wlzrp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-528700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-vip-ha-528700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node ha-528700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node ha-528700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node ha-528700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m58s                node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	  Normal  RegisteredNode           5m41s                node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	  Normal  RegisteredNode           108s                 node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	
	
	Name:               ha-528700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T04_28_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:28:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:30:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:29:58 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:29:58 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:29:58 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:29:58 +0000   Mon, 03 Jun 2024 11:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.89.50
	  Hostname:    ha-528700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 03139da24ec642c79bb348ceaf512292
	  System UUID:                51f24894-b999-ee44-9796-5032cc45e0e1
	  Boot ID:                    8d86378c-bcfc-4115-8889-05350921e2c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bz4xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-528700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m6s
	  kube-system                 kindnet-m9x6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m10s
	  kube-system                 kube-apiserver-ha-528700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-528700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-fggr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-ha-528700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-vip-ha-528700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m11s)  kubelet          Node ha-528700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m11s)  kubelet          Node ha-528700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m11s)  kubelet          Node ha-528700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	  Normal  RegisteredNode           108s                   node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	
	
	==> dmesg <==
	[  +1.227576] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.004599] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 11:19] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.191658] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Jun 3 11:20] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.101566] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.540796] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.188931] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.229265] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.790155] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.174359] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.187962] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.263943] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[ +11.269834] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.102606] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.407460] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +6.451804] systemd-fstab-generator[1724]: Ignoring "noauto" option for root device
	[  +0.103095] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.695159] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.629666] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[ +14.937078] kauditd_printk_skb: 17 callbacks suppressed
	[Jun 3 11:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.857066] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [7dce0e761e83] <==
	{"level":"info","ts":"2024-06-03T11:28:28.804803Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ddfdcb93034918c","to":"b704b5f16cff4cfe","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-03T11:28:28.805158Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6ddfdcb93034918c","remote-peer-id":"b704b5f16cff4cfe"}
	{"level":"info","ts":"2024-06-03T11:28:28.821646Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6ddfdcb93034918c","to":"b704b5f16cff4cfe","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-03T11:28:28.821766Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6ddfdcb93034918c","remote-peer-id":"b704b5f16cff4cfe"}
	{"level":"warn","ts":"2024-06-03T11:28:29.219988Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b704b5f16cff4cfe","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-03T11:28:30.219196Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b704b5f16cff4cfe","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-03T11:28:31.219476Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b704b5f16cff4cfe","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-03T11:28:31.517328Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b704b5f16cff4cfe","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"32.616015ms"}
	{"level":"warn","ts":"2024-06-03T11:28:31.51773Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"7d78f440bc9e3f64","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"33.021415ms"}
	{"level":"info","ts":"2024-06-03T11:28:31.525069Z","caller":"traceutil/trace.go:171","msg":"trace[1288903451] transaction","detail":"{read_only:false; response_revision:1482; number_of_response:1; }","duration":"167.774295ms","start":"2024-06-03T11:28:31.357277Z","end":"2024-06-03T11:28:31.525052Z","steps":["trace[1288903451] 'process raft request'  (duration: 160.840503ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:28:31.726704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ddfdcb93034918c switched to configuration voters=(7917289357876433292 9041244810825842532 13187865657368071422)"}
	{"level":"info","ts":"2024-06-03T11:28:31.726848Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"4ab12054976b5444","local-member-id":"6ddfdcb93034918c"}
	{"level":"info","ts":"2024-06-03T11:28:31.726883Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"6ddfdcb93034918c","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b704b5f16cff4cfe"}
	{"level":"warn","ts":"2024-06-03T11:28:38.360026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.985081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.17.88.175\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-06-03T11:28:38.360166Z","caller":"traceutil/trace.go:171","msg":"trace[779845423] range","detail":"{range_begin:/registry/masterleases/172.17.88.175; range_end:; response_count:1; response_revision:1543; }","duration":"179.15888ms","start":"2024-06-03T11:28:38.180992Z","end":"2024-06-03T11:28:38.360151Z","steps":["trace[779845423] 'range keys from in-memory index tree'  (duration: 177.134683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T11:28:38.360837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.453106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-528700-m03\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-06-03T11:28:38.360868Z","caller":"traceutil/trace.go:171","msg":"trace[1029420407] range","detail":"{range_begin:/registry/minions/ha-528700-m03; range_end:; response_count:1; response_revision:1543; }","duration":"158.486305ms","start":"2024-06-03T11:28:38.202375Z","end":"2024-06-03T11:28:38.360861Z","steps":["trace[1029420407] 'range keys from in-memory index tree'  (duration: 157.171807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T11:28:38.529348Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b704b5f16cff4cfe","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.298898ms"}
	{"level":"warn","ts":"2024-06-03T11:28:38.529661Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"7d78f440bc9e3f64","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.613097ms"}
	{"level":"info","ts":"2024-06-03T11:28:38.659042Z","caller":"traceutil/trace.go:171","msg":"trace[1853894374] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"126.225745ms","start":"2024-06-03T11:28:38.532799Z","end":"2024-06-03T11:28:38.659025Z","steps":["trace[1853894374] 'process raft request'  (duration: 125.914946ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:28:38.664578Z","caller":"traceutil/trace.go:171","msg":"trace[1596552564] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"129.424141ms","start":"2024-06-03T11:28:38.535141Z","end":"2024-06-03T11:28:38.664565Z","steps":["trace[1596552564] 'process raft request'  (duration: 129.333742ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:29:31.897744Z","caller":"traceutil/trace.go:171","msg":"trace[1742368414] transaction","detail":"{read_only:false; response_revision:1804; number_of_response:1; }","duration":"180.446531ms","start":"2024-06-03T11:29:31.717283Z","end":"2024-06-03T11:29:31.897729Z","steps":["trace[1742368414] 'process raft request'  (duration: 180.373631ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T11:30:33.674523Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1045}
	{"level":"info","ts":"2024-06-03T11:30:33.796216Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1045,"took":"121.328162ms","hash":1503946145,"current-db-size-bytes":3620864,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-03T11:30:33.796553Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1503946145,"revision":1045,"compact-revision":-1}
	
	
	==> kernel <==
	 11:30:36 up 12 min,  0 users,  load average: 0.71, 0.63, 0.37
	Linux ha-528700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [545c59933594] <==
	I0603 11:29:53.237384       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:30:03.250054       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:30:03.250216       1 main.go:227] handling current node
	I0603 11:30:03.250231       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:30:03.250238       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:03.250756       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:30:03.251018       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:30:13.267121       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:30:13.267225       1 main.go:227] handling current node
	I0603 11:30:13.267242       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:30:13.267249       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:13.268109       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:30:13.268224       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:30:23.277565       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:30:23.277659       1 main.go:227] handling current node
	I0603 11:30:23.277674       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:30:23.277682       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:23.278054       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:30:23.278087       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:30:33.294244       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:30:33.294684       1 main.go:227] handling current node
	I0603 11:30:33.294804       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:30:33.294839       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:30:33.295081       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:30:33.295212       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [10075ba4eda8] <==
	I0603 11:20:54.086211       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 11:28:18.004907       1 trace.go:236] Trace[1417259662]: "Update" accept:application/json, */*,audit-id:46201383-cfef-47df-94fc-ccd55b7d08a2,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (03-Jun-2024 11:28:17.377) (total time: 601ms):
	Trace[1417259662]: ["GuaranteedUpdate etcd3" audit-id:46201383-cfef-47df-94fc-ccd55b7d08a2,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 601ms (11:28:17.378)
	Trace[1417259662]:  ---"Txn call completed" 600ms (11:28:17.978)]
	Trace[1417259662]: [601.292272ms] [601.292272ms] END
	E0603 11:28:26.916721       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0603 11:28:26.922144       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 11:28:26.922245       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 11:28:26.964833       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 11:28:26.965159       1 timeout.go:142] post-timeout activity - time-elapsed: 83.108399ms, PATCH "/api/v1/namespaces/default/events/ha-528700-m03.17d57b07b630573f" result: <nil>
	E0603 11:29:37.231645       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57569: use of closed network connection
	E0603 11:29:37.702773       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57571: use of closed network connection
	E0603 11:29:39.230641       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57574: use of closed network connection
	E0603 11:29:39.710008       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57576: use of closed network connection
	E0603 11:29:40.160273       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57578: use of closed network connection
	E0603 11:29:40.636776       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57580: use of closed network connection
	E0603 11:29:41.101364       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57582: use of closed network connection
	E0603 11:29:41.551316       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57584: use of closed network connection
	E0603 11:29:41.987023       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57586: use of closed network connection
	E0603 11:29:42.777950       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57589: use of closed network connection
	E0603 11:29:53.204971       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57591: use of closed network connection
	E0603 11:29:53.675767       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57594: use of closed network connection
	E0603 11:30:04.129134       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57596: use of closed network connection
	E0603 11:30:04.552525       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57599: use of closed network connection
	E0603 11:30:14.985146       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57601: use of closed network connection
	
	
	==> kube-controller-manager [ed3e2e6ea4df] <==
	I0603 11:24:33.775715       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-528700-m02" podCIDRs=["10.244.1.0/24"]
	I0603 11:24:38.542766       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-528700-m02"
	I0603 11:28:26.047413       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-528700-m03\" does not exist"
	I0603 11:28:26.080618       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-528700-m03" podCIDRs=["10.244.2.0/24"]
	I0603 11:28:28.617507       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-528700-m03"
	I0603 11:29:29.522789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.170646ms"
	I0603 11:29:29.692006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="168.844088ms"
	I0603 11:29:30.057768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="362.931544ms"
	E0603 11:29:30.057856       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:29:30.135358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.157503ms"
	I0603 11:29:30.140095       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="796.099µs"
	I0603 11:29:30.646316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.1µs"
	I0603 11:29:31.647591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="314.3µs"
	I0603 11:29:31.667403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52µs"
	I0603 11:29:31.682789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="145.6µs"
	I0603 11:29:31.696552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.6µs"
	I0603 11:29:31.711253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.4µs"
	I0603 11:29:31.901681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165µs"
	I0603 11:29:33.246178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.968328ms"
	I0603 11:29:33.246649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="410.901µs"
	I0603 11:29:33.636500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.604361ms"
	I0603 11:29:33.637734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="164.8µs"
	I0603 11:29:33.717803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.6µs"
	I0603 11:29:34.726060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.485023ms"
	I0603 11:29:34.726577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.2µs"
	
	
	==> kube-proxy [eeac3b42fbc2] <==
	I0603 11:20:55.224744       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:20:55.247372       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.88.175"]
	I0603 11:20:55.343996       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:20:55.344060       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:20:55.344082       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:20:55.347933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:20:55.348837       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:20:55.348860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:20:55.352069       1 config.go:192] "Starting service config controller"
	I0603 11:20:55.352126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:20:55.352167       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:20:55.352173       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:20:55.352862       1 config.go:319] "Starting node config controller"
	I0603 11:20:55.352876       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:20:55.452872       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:20:55.453054       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:20:55.453341       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7528ad5d6204] <==
	E0603 11:20:37.087638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 11:20:37.156300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 11:20:37.157003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 11:20:37.177355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 11:20:37.177555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 11:20:37.274652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 11:20:37.274725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 11:20:37.296854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 11:20:37.297238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 11:20:37.331319       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 11:20:37.332098       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 11:20:37.333148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 11:20:37.333240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 11:20:37.411636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 11:20:37.412093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 11:20:37.478645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:20:37.479003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:20:37.523000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:20:37.523429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 11:20:39.882110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:29:29.412996       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="459e6e9b-fa56-4d66-be58-a624e0a86a56" pod="default/busybox-fc5497c4f-bz4xm" assumedNode="ha-528700-m03" currentNode="ha-528700-m02"
	E0603 11:29:29.442730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bz4xm\": pod busybox-fc5497c4f-bz4xm is already assigned to node \"ha-528700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bz4xm" node="ha-528700-m02"
	E0603 11:29:29.443386       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 459e6e9b-fa56-4d66-be58-a624e0a86a56(default/busybox-fc5497c4f-bz4xm) was assumed on ha-528700-m02 but assigned to ha-528700-m03" pod="default/busybox-fc5497c4f-bz4xm"
	E0603 11:29:29.443614       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bz4xm\": pod busybox-fc5497c4f-bz4xm is already assigned to node \"ha-528700-m03\"" pod="default/busybox-fc5497c4f-bz4xm"
	I0603 11:29:29.443828       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bz4xm" node="ha-528700-m03"
	
	
	==> kubelet <==
	Jun 03 11:25:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:26:40 ha-528700 kubelet[2228]: E0603 11:26:40.390006    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:26:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:26:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:26:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:26:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:27:40 ha-528700 kubelet[2228]: E0603 11:27:40.393825    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:27:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:27:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:27:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:27:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:28:40 ha-528700 kubelet[2228]: E0603 11:28:40.390662    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:28:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:28:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:28:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:28:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:29:29 ha-528700 kubelet[2228]: I0603 11:29:29.517929    2228 topology_manager.go:215] "Topology Admit Handler" podUID="69c398cd-c2db-468b-80fa-8f8acff921fe" podNamespace="default" podName="busybox-fc5497c4f-np7rl"
	Jun 03 11:29:29 ha-528700 kubelet[2228]: W0603 11:29:29.526288    2228 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-528700" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-528700' and this object
	Jun 03 11:29:29 ha-528700 kubelet[2228]: E0603 11:29:29.528233    2228 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-528700" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-528700' and this object
	Jun 03 11:29:29 ha-528700 kubelet[2228]: I0603 11:29:29.707793    2228 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrnt\" (UniqueName: \"kubernetes.io/projected/69c398cd-c2db-468b-80fa-8f8acff921fe-kube-api-access-txrnt\") pod \"busybox-fc5497c4f-np7rl\" (UID: \"69c398cd-c2db-468b-80fa-8f8acff921fe\") " pod="default/busybox-fc5497c4f-np7rl"
	Jun 03 11:29:40 ha-528700 kubelet[2228]: E0603 11:29:40.394638    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:29:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:29:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:29:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:29:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 04:30:27.609818    7828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
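
Every stderr capture in this report repeats the same 'Unable to resolve the current Docker CLI context "default"' warning. Docker's CLI keeps per-context metadata under .docker\contexts\meta\<SHA-256 of the context name>\meta.json, and that store was never initialized on this host; the long digest in the path is simply SHA-256 of "default". A minimal Go sketch reconstructing the path from the warning (only the base directory is copied from the log above; the rest is standard library, not minikube code):

	// Sketch only: shows why the warning's path ends in 37a8eec1...
	// Docker's CLI context store names each context's metadata directory
	// after the SHA-256 of the context name.
	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))
		// Base path copied from the warning above; digest prints as
		// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f.
		meta := filepath.Join(`C:\Users\jenkins.minikube1\.docker\contexts\meta`, digest, "meta.json")
		fmt.Println(meta)
	}
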
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-528700 -n ha-528700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-528700 -n ha-528700: (12.4623285s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-528700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (68.68s)
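
The repeated "Operation cannot be fulfilled ...: the object has been modified" and "pod ... is already assigned to node" entries in the controller-manager and scheduler logs above are optimistic-concurrency conflicts on resourceVersion; the components resolve them by re-reading and retrying, which is why each conflict is followed by a successful sync. For reference, a minimal client-go sketch of that standard retry pattern (the function, clientset wiring, and names are hypothetical, not taken from minikube or the test code):

	// Illustrative only; compiles against k8s.io/client-go.
	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// scaleReplicaSet re-reads the object and re-applies the change whenever
	// the apiserver answers with a Conflict ("the object has been modified").
	func scaleReplicaSet(cs kubernetes.Interface, ns, name string, replicas int32) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			rs, err := cs.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			rs.Spec.Replicas = &replicas
			_, err = cs.AppsV1().ReplicaSets(ns).Update(context.TODO(), rs, metav1.UpdateOptions{})
			return err // a Conflict here makes RetryOnConflict try again
		})
	}
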

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (44.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list --output json: exit status 1 (9.5280886s)

                                                
                                                
** stderr ** 
	W0603 04:47:24.765460    3364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
ha_test.go:392: failed to list profiles with json format. args "out/minikube-windows-amd64.exe profile list --output json": exit status 1
ha_test.go:398: failed to decode json from profile list: args "out/minikube-windows-amd64.exe profile list --output json": unexpected end of JSON input
ha_test.go:411: expected the json of 'profile list' to include "ha-528700" but got *""*. args: "out/minikube-windows-amd64.exe profile list --output json"
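
The "unexpected end of JSON input" at ha_test.go:398 is the error encoding/json returns when handed zero bytes: because "profile list --output json" exited 1, its stdout was empty and only the stderr warning was emitted. A minimal reproduction (the map target is a stand-in, not minikube's actual profile-list schema):

	// Reproduces the decode failure reported at ha_test.go:398.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var v map[string]interface{} // stand-in for minikube's profile-list schema
		err := json.Unmarshal([]byte(""), &v)
		fmt.Println(err) // prints: unexpected end of JSON input
	}
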
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-528700 -n ha-528700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-528700 -n ha-528700: (11.9855434s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 logs -n 25: (8.5476119s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:41 PDT | 03 Jun 24 04:41 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:41 PDT | 03 Jun 24 04:41 PDT |
	|         | ha-528700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:41 PDT | 03 Jun 24 04:42 PDT |
	|         | ha-528700:/home/docker/cp-test_ha-528700-m03_ha-528700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:42 PDT | 03 Jun 24 04:42 PDT |
	|         | ha-528700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700 sudo cat                                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:42 PDT | 03 Jun 24 04:42 PDT |
	|         | /home/docker/cp-test_ha-528700-m03_ha-528700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:42 PDT | 03 Jun 24 04:42 PDT |
	|         | ha-528700-m02:/home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:42 PDT | 03 Jun 24 04:42 PDT |
	|         | ha-528700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700-m02 sudo cat                                                                                   | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:42 PDT | 03 Jun 24 04:43 PDT |
	|         | /home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:43 PDT | 03 Jun 24 04:43 PDT |
	|         | ha-528700-m04:/home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:43 PDT | 03 Jun 24 04:43 PDT |
	|         | ha-528700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700-m04 sudo cat                                                                                   | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:43 PDT | 03 Jun 24 04:43 PDT |
	|         | /home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-528700 cp testdata\cp-test.txt                                                                                         | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:43 PDT | 03 Jun 24 04:43 PDT |
	|         | ha-528700-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:43 PDT | 03 Jun 24 04:44 PDT |
	|         | ha-528700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:44 PDT | 03 Jun 24 04:44 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:44 PDT | 03 Jun 24 04:44 PDT |
	|         | ha-528700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:44 PDT | 03 Jun 24 04:44 PDT |
	|         | ha-528700:/home/docker/cp-test_ha-528700-m04_ha-528700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:44 PDT | 03 Jun 24 04:44 PDT |
	|         | ha-528700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700 sudo cat                                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:44 PDT | 03 Jun 24 04:45 PDT |
	|         | /home/docker/cp-test_ha-528700-m04_ha-528700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:45 PDT | 03 Jun 24 04:45 PDT |
	|         | ha-528700-m02:/home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:45 PDT | 03 Jun 24 04:45 PDT |
	|         | ha-528700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700-m02 sudo cat                                                                                   | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:45 PDT | 03 Jun 24 04:45 PDT |
	|         | /home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt                                                                       | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:45 PDT | 03 Jun 24 04:45 PDT |
	|         | ha-528700-m03:/home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n                                                                                                          | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:45 PDT | 03 Jun 24 04:46 PDT |
	|         | ha-528700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-528700 ssh -n ha-528700-m03 sudo cat                                                                                   | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:46 PDT | 03 Jun 24 04:46 PDT |
	|         | /home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-528700 node stop m02 -v=7                                                                                              | ha-528700 | minikube1\jenkins | v1.33.1 | 03 Jun 24 04:46 PDT | 03 Jun 24 04:46 PDT |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 04:17:34
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 04:17:34.279474    1052 out.go:291] Setting OutFile to fd 1144 ...
	I0603 04:17:34.280499    1052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:17:34.280499    1052 out.go:304] Setting ErrFile to fd 784...
	I0603 04:17:34.280499    1052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:17:34.308277    1052 out.go:298] Setting JSON to false
	I0603 04:17:34.311960    1052 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2682,"bootTime":1717410772,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 04:17:34.311960    1052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 04:17:34.318093    1052 out.go:177] * [ha-528700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 04:17:34.324284    1052 notify.go:220] Checking for updates...
	I0603 04:17:34.326128    1052 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:17:34.332141    1052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 04:17:34.335271    1052 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 04:17:34.337703    1052 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 04:17:34.343188    1052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 04:17:34.346027    1052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 04:17:39.641140    1052 out.go:177] * Using the hyperv driver based on user configuration
	I0603 04:17:39.645057    1052 start.go:297] selected driver: hyperv
	I0603 04:17:39.645057    1052 start.go:901] validating driver "hyperv" against <nil>
	I0603 04:17:39.645057    1052 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 04:17:39.692201    1052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 04:17:39.693219    1052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:17:39.693219    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:17:39.693219    1052 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 04:17:39.693219    1052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 04:17:39.693752    1052 start.go:340] cluster config:
	{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:17:39.693752    1052 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 04:17:39.701225    1052 out.go:177] * Starting "ha-528700" primary control-plane node in "ha-528700" cluster
	I0603 04:17:39.703176    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:17:39.703176    1052 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 04:17:39.703176    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:17:39.704168    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:17:39.704475    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:17:39.704761    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:17:39.705302    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json: {Name:mk56a0c30d28b92a4751ddb457875919745f5dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:17:39.705535    1052 start.go:360] acquireMachinesLock for ha-528700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:17:39.705535    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700"
	I0603 04:17:39.706850    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:17:39.706850    1052 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 04:17:39.711269    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:17:39.711540    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:17:39.711638    1052 client.go:168] LocalClient.Create starting
	I0603 04:17:39.712344    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:17:39.712586    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:17:39.712633    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:17:39.712735    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:17:41.754872    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:17:41.755561    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:41.755561    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:17:43.486306    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:17:43.486475    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:43.486852    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:17:44.911108    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:17:44.911475    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:44.911544    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:17:48.455979    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:17:48.456068    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:48.458437    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:17:48.956813    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:17:49.117768    1052 main.go:141] libmachine: Creating VM...
	I0603 04:17:49.117768    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:17:51.875543    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:17:51.875543    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:51.875543    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:17:51.876721    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:17:53.567279    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:17:53.567279    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:53.567279    1052 main.go:141] libmachine: Creating VHD
	I0603 04:17:53.568295    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:17:57.362817    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DA3D68A4-FBFF-4E35-82A3-2AFCCFA39303
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:17:57.362967    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:17:57.362967    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:17:57.363078    1052 main.go:141] libmachine: Writing SSH key tar header
	I0603 04:17:57.371881    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:00.559256    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd' -SizeBytes 20000MB
	I0603 04:18:03.073429    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:03.073629    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:03.073711    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:18:06.704518    1052 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-528700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:18:06.704518    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:06.705311    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700 -DynamicMemoryEnabled $false
	I0603 04:18:08.938623    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:08.938623    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:08.938850    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700 -Count 2
	I0603 04:18:11.092245    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:11.092245    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:11.092531    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\boot2docker.iso'
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:13.706923    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\disk.vhd'
	I0603 04:18:16.334630    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:16.334630    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:16.334630    1052 main.go:141] libmachine: Starting VM...
	I0603 04:18:16.335439    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700
	I0603 04:18:19.454197    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:19.454341    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:19.454401    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:18:19.454401    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:21.710823    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:21.710823    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:21.711756    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:24.221825    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:24.222240    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:25.223961    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:27.453594    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:30.083436    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:30.083436    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:31.085402    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:33.287679    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:35.770370    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:35.770532    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:36.772864    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:38.995492    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:38.996600    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:38.996600    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:41.588229    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:18:41.588229    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:42.596227    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:44.816798    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:44.817463    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:44.817463    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:47.383986    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:47.383986    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:47.384246    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:49.468121    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:49.468290    1052 main.go:141] libmachine: [stderr =====>] : 
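
The "Waiting for host to start..." block above is a poll: query the VM state, then the first NIC's first IP address, and retry while the IP comes back empty. A sketch under those assumptions (waitForIP and run are hypothetical names, not minikube's):

    package sketch

    import (
        "fmt"
        "strings"
        "time"
    )

    // waitForIP repeats the two PowerShell queries shown in the log until the
    // first network adapter reports an address. run abstracts the powershell.exe
    // invocation (see the earlier sketch).
    func waitForIP(vm string, run func(string) (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := run(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err != nil {
                return "", err
            }
            if strings.TrimSpace(state) != "Running" {
                return "", fmt.Errorf("vm %s is %q, not Running", vm, strings.TrimSpace(state))
            }
            ip, err := run(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if err != nil {
                return "", err
            }
            if ip = strings.TrimSpace(ip); ip != "" {
                return ip, nil // e.g. 172.17.88.175 in the run above
            }
            time.Sleep(time.Second) // matches the ~1s pause between attempts logged above
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }
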
	I0603 04:18:49.468290    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:18:49.468290    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:51.615213    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:51.615213    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:51.615495    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:54.172364    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:54.172632    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:54.178089    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:18:54.187911    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:18:54.187911    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:18:54.311846    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:18:54.311846    1052 buildroot.go:166] provisioning hostname "ha-528700"
	I0603 04:18:54.312533    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:56.473026    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:18:59.007476    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:18:59.008220    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:18:59.013463    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:18:59.014176    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:18:59.014176    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700 && echo "ha-528700" | sudo tee /etc/hostname
	I0603 04:18:59.172941    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700
	
	I0603 04:18:59.172941    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:01.276589    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:01.276589    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:01.276771    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:03.785750    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:03.785951    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:03.794590    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:03.794590    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:03.794590    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:19:03.933453    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
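
The provisioning steps above run shell snippets over the "native" SSH client against 172.17.88.175:22 with the machine's id_rsa key. A minimal sketch, assuming the golang.org/x/crypto/ssh package (not necessarily the exact client minikube uses):

    package sketch

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH executes one command on the guest, the way the
    // "About to run SSH command:" lines above do. Paths and IPs are
    // taken from this run; error handling is kept minimal.
    func runOverSSH(cmd string) ([]byte, error) {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa`)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        client, err := ssh.Dial("tcp", "172.17.88.175:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            return nil, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer sess.Close()
        return sess.CombinedOutput(cmd) // e.g. the hostname / tee snippets above
    }
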
	I0603 04:19:03.933628    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:19:03.933649    1052 buildroot.go:174] setting up certificates
	I0603 04:19:03.933709    1052 provision.go:84] configureAuth start
	I0603 04:19:03.933746    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:06.043155    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:06.043881    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:06.043952    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:08.597494    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:08.598009    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:08.598009    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:10.706029    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:10.706029    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:10.707016    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:13.223196    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:13.223958    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:13.223958    1052 provision.go:143] copyHostCerts
	I0603 04:19:13.224247    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:19:13.224247    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:19:13.224247    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:19:13.225106    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:19:13.226474    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:19:13.226758    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:19:13.226830    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:19:13.227242    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:19:13.228524    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:19:13.228524    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:19:13.228524    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:19:13.229290    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:19:13.230067    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700 san=[127.0.0.1 172.17.88.175 ha-528700 localhost minikube]
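
The server certificate generated above must carry every name a client might dial, hence the SAN list in the log line. A compressed standard-library sketch of issuing such a CA-signed cert (illustrative; everything beyond the logged SANs, org, and the 26280h expiry from the config is an assumption):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a serving cert signed by the given CA, with the
    // IP and DNS SANs printed in the log: san=[127.0.0.1 172.17.88.175 ha-528700 localhost minikube].
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-528700"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.88.175")},
            DNSNames:     []string{"ha-528700", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
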
	I0603 04:19:13.392366    1052 provision.go:177] copyRemoteCerts
	I0603 04:19:13.403353    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:19:13.403353    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:15.529787    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:15.530739    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:15.530771    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:18.068892    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:18.069979    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:18.069979    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:19:18.178496    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7750258s)
	I0603 04:19:18.178600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:19:18.178749    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:19:18.225301    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:19:18.225881    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 04:19:18.266316    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:19:18.266887    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:19:18.314022    1052 provision.go:87] duration metric: took 14.3802829s to configureAuth
	I0603 04:19:18.314022    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:19:18.314022    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:19:18.314832    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:20.408679    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:20.408784    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:20.408784    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:22.943317    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:22.943317    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:22.948193    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:22.948889    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:22.948889    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:19:23.090330    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:19:23.090406    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:19:23.090670    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:19:23.090764    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:25.233000    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:25.233224    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:25.233224    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:27.771083    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:27.771083    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:27.777116    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:27.777116    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:27.777116    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:19:27.942412    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:19:27.942652    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:30.045101    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:30.045101    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:30.045192    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:32.572044    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:32.572927    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:32.577401    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:32.577620    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:32.577620    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:19:34.687072    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
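
The diff-or-swap one-liner above is an idempotency guard: Docker is only restarted when docker.service.new actually differs from the installed unit. In this run the diff failed simply because no unit existed yet, so the staged file was moved into place and the service enabled. A hypothetical helper that builds the same command string:

    package sketch

    import "fmt"

    // swapIfChanged reproduces the guard shown in the log: replace the unit and
    // restart the service only when the staged copy differs (diff exits non-zero).
    func swapIfChanged(unit string) string {
        cur := "/lib/systemd/system/" + unit
        next := cur + ".new"
        return fmt.Sprintf("sudo diff -u %s %s || { sudo mv %s %s; "+
            "sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
            cur, next, next, cur, unit, unit)
    }
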
	
	I0603 04:19:34.687156    1052 machine.go:97] duration metric: took 45.2187707s to provisionDockerMachine
	I0603 04:19:34.687187    1052 client.go:171] duration metric: took 1m54.9752507s to LocalClient.Create
	I0603 04:19:34.687226    1052 start.go:167] duration metric: took 1m54.9754452s to libmachine.API.Create "ha-528700"
	I0603 04:19:34.687226    1052 start.go:293] postStartSetup for "ha-528700" (driver="hyperv")
	I0603 04:19:34.687276    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:19:34.701301    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:19:34.701301    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:36.796284    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:36.796653    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:36.796653    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:39.274846    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:39.275628    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:39.275628    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:19:39.379802    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6784917s)
	I0603 04:19:39.390396    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:19:39.397223    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:19:39.397308    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:19:39.397677    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:19:39.398241    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:19:39.398241    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:19:39.410833    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:19:39.428484    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:19:39.474103    1052 start.go:296] duration metric: took 4.7868161s for postStartSetup
	I0603 04:19:39.476083    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:41.551797    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:44.066415    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:44.066415    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:44.066812    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:19:44.069713    1052 start.go:128] duration metric: took 2m4.3624882s to createHost
	I0603 04:19:44.069811    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:46.141675    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:46.141675    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:46.142131    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:48.785078    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:48.785285    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:48.790988    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:48.791178    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:48.791178    1052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 04:19:48.926441    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717413588.932657281
	
	I0603 04:19:48.926538    1052 fix.go:216] guest clock: 1717413588.932657281
	I0603 04:19:48.926538    1052 fix.go:229] Guest: 2024-06-03 04:19:48.932657281 -0700 PDT Remote: 2024-06-03 04:19:44.0697138 -0700 PDT m=+129.875455801 (delta=4.862943481s)
	I0603 04:19:48.926538    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:50.999890    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:53.469419    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:53.469601    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:53.475370    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:19:53.475520    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.88.175 22 <nil> <nil>}
	I0603 04:19:53.475520    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717413588
	I0603 04:19:53.616830    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:19:48 UTC 2024
	
	I0603 04:19:53.616888    1052 fix.go:236] clock set: Mon Jun  3 11:19:48 UTC 2024
	 (err=<nil>)
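
The clock fix-up above reads the guest's `date +%s.%N`, computes the drift against the host (delta=4.862943481s here), and rewrites the guest clock with `sudo date -s @<seconds>`. A sketch of that decision; the 2s threshold and the choice of which timestamp to write are assumptions, not minikube's documented policy:

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockFixCommand parses the guest's `date +%s.%N` output and, when the
    // drift against the host clock is large enough, returns a `date -s` command.
    func clockFixCommand(guestOut string, host time.Time) (string, bool) {
        secs := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0] // "1717413588.932657281" -> "1717413588"
        guestSec, err := strconv.ParseInt(secs, 10, 64)
        if err != nil {
            return "", false
        }
        drift := host.Sub(time.Unix(guestSec, 0))
        if drift < 0 {
            drift = -drift
        }
        if drift < 2*time.Second {
            return "", false // close enough; leave the guest clock alone
        }
        return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
    }
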
	I0603 04:19:53.616888    1052 start.go:83] releasing machines lock for "ha-528700", held for 2m13.9100203s
	I0603 04:19:53.617233    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:55.697877    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:19:55.697877    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:55.698034    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:19:58.239049    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:19:58.239278    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:19:58.244542    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:19:58.244618    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:19:58.255849    1052 ssh_runner.go:195] Run: cat /version.json
	I0603 04:19:58.255849    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:00.443259    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:00.443294    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:00.443390    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:00.446608    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:00.446608    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:00.447138    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:03.045962    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:20:03.045962    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:03.046534    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:20:03.068618    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:20:03.069143    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:03.069346    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:20:03.147600    1052 ssh_runner.go:235] Completed: cat /version.json: (4.8917413s)
	I0603 04:20:03.159582    1052 ssh_runner.go:195] Run: systemctl --version
	I0603 04:20:03.227940    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9832665s)
	I0603 04:20:03.242351    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 04:20:03.251060    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:20:03.261700    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:20:03.288840    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 04:20:03.288840    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:20:03.289014    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:20:03.333188    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:20:03.364444    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:20:03.386309    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:20:03.396698    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:20:03.428257    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:20:03.461065    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:20:03.491085    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:20:03.521004    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:20:03.552125    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:20:03.581343    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:20:03.612969    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:20:03.643585    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:20:03.672804    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:20:03.700795    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:03.905976    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
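
The run of sed edits above rewrites /etc/containerd/config.toml in place, then reloads systemd and restarts containerd. A hypothetical loop issuing the same commands through an SSH-runner abstraction (the list is truncated to a few of the logged edits):

    package sketch

    // configureContainerd replays the logged edits through run, an abstraction
    // over "execute one shell command on the guest via SSH".
    func configureContainerd(run func(cmd string) error) error {
        edits := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            // ...the remaining sed/sysctl edits from the log would follow here...
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart containerd`,
        }
        for _, e := range edits {
            if err := run(e); err != nil {
                return err // stop at the first failing edit
            }
        }
        return nil
    }
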
	I0603 04:20:03.936400    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:20:03.949218    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:20:03.985022    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:20:04.020448    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:20:04.071731    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:20:04.108009    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:20:04.143469    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:20:04.207257    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:20:04.234836    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:20:04.281169    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:20:04.296750    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:20:04.315070    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:20:04.355641    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:20:04.538873    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:20:04.731474    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:20:04.731528    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:20:04.775348    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:04.976155    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:20:07.481092    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5049321s)
	I0603 04:20:07.493884    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:20:07.528430    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:20:07.562450    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:20:07.744712    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:20:07.921187    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:08.111414    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:20:08.155596    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:20:08.188947    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:08.381839    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:20:08.495946    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:20:08.510245    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:20:08.520576    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:20:08.533217    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:20:08.550794    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:20:08.602284    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:20:08.610545    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:20:08.650495    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:20:08.683263    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:20:08.683780    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:20:08.687898    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:20:08.691391    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:20:08.691391    1052 ip.go:210] interface addr: 172.17.80.1/20
	I0603 04:20:08.702174    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:20:08.708360    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:20:08.747538    1052 kubeadm.go:877] updating cluster {Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 04:20:08.747538    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:20:08.757528    1052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:20:08.784796    1052 docker.go:685] Got preloaded images: 
	I0603 04:20:08.784796    1052 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 04:20:08.795780    1052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 04:20:08.823763    1052 ssh_runner.go:195] Run: which lz4
	I0603 04:20:08.829463    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 04:20:08.841700    1052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 04:20:08.847855    1052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 04:20:08.847855    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 04:20:10.761557    1052 docker.go:649] duration metric: took 1.9316157s to copy over tarball
	I0603 04:20:10.774161    1052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 04:20:19.330577    1052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5563985s)
	I0603 04:20:19.330577    1052 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 04:20:19.394097    1052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 04:20:19.415538    1052 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 04:20:19.459955    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:19.657766    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:20:22.621842    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9640699s)
	I0603 04:20:22.632574    1052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 04:20:22.657731    1052 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 04:20:22.657731    1052 cache_images.go:84] Images are preloaded, skipping loading
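
"Images are preloaded, skipping loading" is a simple membership test over `docker images --format {{.Repository}}:{{.Tag}}` output like the stdout block above. A sketch of that check:

    package sketch

    import "strings"

    // preloaded reports whether a sentinel image appears in the newline-separated
    // `docker images` output captured above.
    func preloaded(dockerImagesOut, want string) bool {
        for _, img := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
            if strings.TrimSpace(img) == want {
                return true
            }
        }
        return false
    }

    // Usage: preloaded(out, "registry.k8s.io/kube-apiserver:v1.30.1") — the image
    // the log reported as "wasn't preloaded" before the tarball was extracted.
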
	I0603 04:20:22.657731    1052 kubeadm.go:928] updating node { 172.17.88.175 8443 v1.30.1 docker true true} ...
	I0603 04:20:22.657731    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.88.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 04:20:22.665577    1052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 04:20:22.697453    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:20:22.697453    1052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 04:20:22.697453    1052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 04:20:22.697453    1052 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.88.175 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-528700 NodeName:ha-528700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.88.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.88.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 04:20:22.697977    1052 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.88.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-528700"
	  kubeletExtraArgs:
	    node-ip: 172.17.88.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.88.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 04:20:22.698095    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:20:22.709008    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:20:22.744297    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:20:22.744297    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0603 04:20:22.756009    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:20:22.771505    1052 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 04:20:22.784716    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 04:20:22.802485    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0603 04:20:22.831253    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:20:22.859871    1052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0603 04:20:22.889888    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
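
The "scp memory --> <path> (N bytes)" lines stream an in-memory byte slice to a guest file instead of copying a local file. One way to do that over SSH, assuming golang.org/x/crypto/ssh (a sketch, not minikube's ssh_runner):

    package sketch

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote pipes data into `sudo tee` on the guest, which is enough to
    // land generated configs like 10-kubeadm.conf or kube-vip.yaml above.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)                           // the "memory" side
        return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", path)) // the guest side
    }
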
	I0603 04:20:22.932230    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:20:22.939359    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:20:22.971841    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:20:23.158580    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:20:23.188521    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.88.175
	I0603 04:20:23.188521    1052 certs.go:194] generating shared ca certs ...
	I0603 04:20:23.188521    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.189476    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:20:23.189476    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:20:23.190194    1052 certs.go:256] generating profile certs ...
	I0603 04:20:23.190984    1052 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:20:23.190984    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt with IP's: []
	I0603 04:20:23.270593    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt ...
	I0603 04:20:23.270593    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.crt: {Name:mk26f6668f30a24f17487b3468c5967d94a7b23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.272674    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key ...
	I0603 04:20:23.272674    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key: {Name:mk99d1965e4aa7cd3f8387d67207dbf318ee3dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.274634    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705
	I0603 04:20:23.274932    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.95.254]
	I0603 04:20:23.472931    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 ...
	I0603 04:20:23.472931    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705: {Name:mke45570e1156208409a537001364befd204b3a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.474569    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705 ...
	I0603 04:20:23.474569    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705: {Name:mkb59daca4be328d47fbfa517734e651ff3daf7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.475342    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.c634f705 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:20:23.487805    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.c634f705 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
	I0603 04:20:23.489612    1052 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:20:23.490207    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt with IP's: []
	I0603 04:20:23.773112    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt ...
	I0603 04:20:23.773112    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt: {Name:mk890eea760a932863e8b60d5a4125a5a0573734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:23.775051    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key ...
	I0603 04:20:23.775051    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key: {Name:mkf824e09a768b2cc3bd2d9fc3ba5d6dbdb038a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
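The "generating signed profile cert" steps above boil down to issuing leaf certificates against the shared minikube CA, with the apiserver cert carrying the service IP, loopback, node IP, and HA VIP as SANs. A self-contained sketch using Go's crypto/x509; the real flow loads the existing ca.crt/ca.key pair instead of generating a throwaway one, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA so the sketch runs standalone; minikube would load
    	// the existing ca.crt/ca.key pair here instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert with the same IP SANs the apiserver cert gets above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("172.17.88.175"), net.ParseIP("172.17.95.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }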
	I0603 04:20:23.776093    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:20:23.776828    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:20:23.777463    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:20:23.777708    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:20:23.777818    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:20:23.787672    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:20:23.788638    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:20:23.789004    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:20:23.789269    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:20:23.789383    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:20:23.789811    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:20:23.789999    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:20:23.790651    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:20:23.790979    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:20:23.791120    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:20:23.791120    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:23.791773    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:20:23.842816    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:20:23.888945    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:20:23.948373    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:20:23.997908    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 04:20:24.067558    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 04:20:24.103484    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:20:24.140695    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:20:24.185804    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:20:24.228262    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:20:24.272802    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:20:24.318227    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 04:20:24.362026    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:20:24.384389    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:20:24.418076    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.423990    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.436026    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:20:24.458151    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 04:20:24.491862    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:20:24.524181    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.531626    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.543318    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:20:24.562712    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:20:24.594792    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:20:24.627181    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.634276    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.645382    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:20:24.666552    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
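The test/ln/openssl triples above implement OpenSSL's hashed-directory convention: trust lookups in /etc/ssl/certs resolve through <subject-hash>.0 symlinks, so each CA file gets a link named after its "openssl x509 -hash" value (b5213941 for minikubeCA.pem in this run). A sketch of the same step, assuming openssl is on PATH:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// "openssl x509 -hash -noout" prints the subject-name hash that
    	// OpenSSL expects to find as a <hash>.0 symlink in /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs equivalent: drop any stale link before recreating it.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    }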
	I0603 04:20:24.695492    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:20:24.702220    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:20:24.702747    1052 kubeadm.go:391] StartCluster: {Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:20:24.712656    1052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 04:20:24.746385    1052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 04:20:24.778605    1052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 04:20:24.807395    1052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 04:20:24.832555    1052 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 04:20:24.832555    1052 kubeadm.go:156] found existing configuration files:
	
	I0603 04:20:24.844251    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 04:20:24.869327    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 04:20:24.881456    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 04:20:24.913518    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 04:20:24.932561    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 04:20:24.946431    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 04:20:24.981090    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 04:20:25.003717    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 04:20:25.015576    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 04:20:25.046071    1052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 04:20:25.064594    1052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 04:20:25.076990    1052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 04:20:25.101745    1052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 04:20:25.593249    1052 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 04:20:40.843144    1052 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 04:20:40.843338    1052 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 04:20:40.843567    1052 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 04:20:40.843757    1052 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 04:20:40.844059    1052 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0603 04:20:40.844268    1052 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 04:20:40.851623    1052 out.go:204]   - Generating certificates and keys ...
	I0603 04:20:40.851623    1052 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 04:20:40.851623    1052 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 04:20:40.852308    1052 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 04:20:40.852354    1052 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-528700 localhost] and IPs [172.17.88.175 127.0.0.1 ::1]
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 04:20:40.853098    1052 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-528700 localhost] and IPs [172.17.88.175 127.0.0.1 ::1]
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 04:20:40.853731    1052 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 04:20:40.854345    1052 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 04:20:40.854523    1052 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 04:20:40.854653    1052 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 04:20:40.854744    1052 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 04:20:40.854947    1052 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 04:20:40.855170    1052 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 04:20:40.855389    1052 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 04:20:40.858034    1052 out.go:204]   - Booting up control plane ...
	I0603 04:20:40.858262    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 04:20:40.858450    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 04:20:40.858680    1052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 04:20:40.858877    1052 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 04:20:40.859035    1052 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 04:20:40.859172    1052 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 04:20:40.859172    1052 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 04:20:40.859172    1052 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 04:20:40.859731    1052 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002233583s
	I0603 04:20:40.859925    1052 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 04:20:40.860008    1052 kubeadm.go:309] [api-check] The API server is healthy after 8.793195013s
	I0603 04:20:40.860008    1052 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 04:20:40.860578    1052 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 04:20:40.860848    1052 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 04:20:40.861344    1052 kubeadm.go:309] [mark-control-plane] Marking the node ha-528700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 04:20:40.861459    1052 kubeadm.go:309] [bootstrap-token] Using token: 4zfnhz.pxe484xavk1amvz9
	I0603 04:20:40.864555    1052 out.go:204]   - Configuring RBAC rules ...
	I0603 04:20:40.864555    1052 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 04:20:40.865301    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 04:20:40.865721    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 04:20:40.865835    1052 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 04:20:40.866530    1052 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 04:20:40.866530    1052 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 04:20:40.866805    1052 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.866805    1052 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.866805    1052 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 04:20:40.866805    1052 kubeadm.go:309] 
	I0603 04:20:40.867390    1052 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 04:20:40.867566    1052 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 04:20:40.867566    1052 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 04:20:40.867566    1052 kubeadm.go:309] 
	I0603 04:20:40.867866    1052 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 04:20:40.867866    1052 kubeadm.go:309] 
	I0603 04:20:40.867986    1052 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 04:20:40.867986    1052 kubeadm.go:309] 
	I0603 04:20:40.868145    1052 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 04:20:40.868145    1052 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 04:20:40.868145    1052 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 04:20:40.868145    1052 kubeadm.go:309] 
	I0603 04:20:40.868145    1052 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 04:20:40.868813    1052 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 04:20:40.868813    1052 kubeadm.go:309] 
	I0603 04:20:40.868813    1052 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zfnhz.pxe484xavk1amvz9 \
	I0603 04:20:40.868813    1052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 \
	I0603 04:20:40.870411    1052 kubeadm.go:309] 	--control-plane 
	I0603 04:20:40.870442    1052 kubeadm.go:309] 
	I0603 04:20:40.870608    1052 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 04:20:40.870608    1052 kubeadm.go:309] 
	I0603 04:20:40.870646    1052 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zfnhz.pxe484xavk1amvz9 \
	I0603 04:20:40.871054    1052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
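The --discovery-token-ca-cert-hash value printed in the join commands above is not a hash of the whole certificate: kubeadm hashes the CA certificate's Subject Public Key Info with SHA-256. A short sketch that reproduces the value from ca.crt:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// kubeadm's --discovery-token-ca-cert-hash is SHA-256 over the CA
    	// certificate's Subject Public Key Info, prefixed with "sha256:".
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }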
	I0603 04:20:40.871207    1052 cni.go:84] Creating CNI manager for ""
	I0603 04:20:40.871234    1052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 04:20:40.874456    1052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 04:20:40.888114    1052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 04:20:40.896789    1052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 04:20:40.896789    1052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 04:20:40.945803    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 04:20:41.548297    1052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 04:20:41.562830    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:41.562830    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700 minikube.k8s.io/updated_at=2024_06_03T04_20_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=true
	I0603 04:20:41.575916    1052 ops.go:34] apiserver oom_adj: -16
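The oom_adj probe above (reported as -16) confirms the kube-apiserver process is shielded from the kernel's OOM killer; lower values make a process a less likely victim. The same check as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the log line above: resolve kube-apiserver's PID
    	// with pgrep and read its oom_adj; -16 tells the kernel's OOM
    	// killer to prefer other victims.
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }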
	I0603 04:20:41.760200    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:42.264027    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:42.764087    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:43.265886    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:43.765929    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:44.267711    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:44.769589    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:45.274121    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:45.764624    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:46.266962    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:46.769697    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:47.262470    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:47.760475    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:48.263396    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:48.764931    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:49.271031    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:49.760310    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:50.263598    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:50.772868    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:51.260213    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:51.774569    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:52.274128    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:52.765484    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:53.271527    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 04:20:53.413092    1052 kubeadm.go:1107] duration metric: took 11.8647703s to wait for elevateKubeSystemPrivileges
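The burst of identical "kubectl get sa default" runs above is a polling loop: privileges for kube-system cannot be elevated until the default service account exists, so minikube re-runs the query roughly twice a second until it succeeds (11.86s in this run). A sketch of the pattern; the two-minute deadline is an assumption, not the value minikube uses:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Poll until "kubectl get sa default" succeeds, i.e. until the
    	// default service account has been created by the controller.
    	deadline := time.Now().Add(2 * time.Minute)
    	for {
    		err := exec.Command("kubectl", "--kubeconfig",
    			"/var/lib/minikube/kubeconfig", "get", "sa", "default").Run()
    		if err == nil {
    			return
    		}
    		if time.Now().After(deadline) {
    			log.Fatalf("default service account never appeared: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }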
	W0603 04:20:53.413211    1052 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 04:20:53.413317    1052 kubeadm.go:393] duration metric: took 28.710404s to StartCluster
	I0603 04:20:53.413317    1052 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:53.413552    1052 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:20:53.415362    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:20:53.416675    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 04:20:53.416675    1052 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:20:53.416779    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:20:53.416779    1052 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 04:20:53.416938    1052 addons.go:69] Setting storage-provisioner=true in profile "ha-528700"
	I0603 04:20:53.416938    1052 addons.go:69] Setting default-storageclass=true in profile "ha-528700"
	I0603 04:20:53.416998    1052 addons.go:234] Setting addon storage-provisioner=true in "ha-528700"
	I0603 04:20:53.417037    1052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-528700"
	I0603 04:20:53.417120    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:20:53.417237    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:20:53.417856    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:53.418365    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:53.608903    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 04:20:53.974253    1052 start.go:946] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
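The sed pipeline above splices a hosts block into CoreDNS's Corefile just ahead of its forward stanza, so in-cluster lookups of host.minikube.internal resolve to the Hyper-V host (172.17.80.1) without leaving the cluster. The patched fragment should look roughly like:

            hosts {
               172.17.80.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

The fallthrough directive matters: any name not listed in the hosts block still falls through to the forward plugin instead of getting NXDOMAIN.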
	I0603 04:20:55.744047    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:55.744228    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:55.747986    1052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 04:20:55.745050    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:20:55.750154    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 04:20:55.750938    1052 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:20:55.750938    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 04:20:55.751102    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:55.752207    1052 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 04:20:55.752207    1052 addons.go:234] Setting addon default-storageclass=true in "ha-528700"
	I0603 04:20:55.752737    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:20:55.753915    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:20:58.094478    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:58.094535    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:58.094567    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:20:58.244902    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:20:58.245739    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:20:58.245816    1052 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 04:20:58.245816    1052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 04:20:58.245816    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:21:00.542797    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:00.542797    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:00.543867    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:00.895492    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:21:00.895492    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:00.895492    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:21:01.031099    1052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 04:21:03.312217    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:21:03.312388    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:03.312615    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:21:03.458838    1052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 04:21:03.650430    1052 round_trippers.go:463] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 04:21:03.650430    1052 round_trippers.go:469] Request Headers:
	I0603 04:21:03.650430    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:21:03.650430    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:21:03.664725    1052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 04:21:03.665936    1052 round_trippers.go:463] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 04:21:03.665936    1052 round_trippers.go:469] Request Headers:
	I0603 04:21:03.665936    1052 round_trippers.go:473]     Content-Type: application/json
	I0603 04:21:03.665936    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:21:03.665936    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:21:03.668565    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:21:03.672613    1052 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 04:21:03.676602    1052 addons.go:510] duration metric: took 10.2598013s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 04:21:03.676602    1052 start.go:245] waiting for cluster config update ...
	I0603 04:21:03.676602    1052 start.go:254] writing updated cluster config ...
	I0603 04:21:03.679565    1052 out.go:177] 
	I0603 04:21:03.691600    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:21:03.691600    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:21:03.698575    1052 out.go:177] * Starting "ha-528700-m02" control-plane node in "ha-528700" cluster
	I0603 04:21:03.700610    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:21:03.700610    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:21:03.701571    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:21:03.701571    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:21:03.701571    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:21:03.704568    1052 start.go:360] acquireMachinesLock for ha-528700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:21:03.704568    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700-m02"
	I0603 04:21:03.704568    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:21:03.704568    1052 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 04:21:03.708561    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:21:03.708561    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:21:03.708561    1052 client.go:168] LocalClient.Create starting
	I0603 04:21:03.708561    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:21:03.709560    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:21:05.722266    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:21:05.722710    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:05.722710    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:21:07.503929    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:21:07.504620    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:07.504620    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:08.996575    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:21:12.764588    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:21:12.764886    1052 main.go:141] libmachine: [stderr =====>] : 
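Every "[executing ==>]" / "[stdout =====>]" pair above follows one pattern: the driver shells out to powershell.exe with -NoProfile -NonInteractive, captures stdout and stderr separately, and parses the trimmed stdout. A sketch of that round trip using the state query from the log:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Shell out to PowerShell exactly as the [executing ==>] lines do,
    	// keeping stdout and stderr in separate buffers so they can be
    	// logged on their own lines.
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	cmd := exec.Command(ps, "-NoProfile", "-NonInteractive",
    		`( Hyper-V\Get-VM ha-528700-m02 ).state`)
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("powershell failed:", err, stderr.String())
    		return
    	}
    	fmt.Println("state:", strings.TrimSpace(stdout.String())) // e.g. "Running"
    }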
	I0603 04:21:12.768485    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:21:13.276095    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:21:13.449041    1052 main.go:141] libmachine: Creating VM...
	I0603 04:21:13.449041    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:21:16.397677    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:21:16.397677    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:16.398553    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:21:16.398738    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:21:18.183564    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:21:18.183701    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:18.183701    1052 main.go:141] libmachine: Creating VHD
	I0603 04:21:18.183701    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:21:22.009862    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1162F8CB-005F-460A-BFAA-B3F8A25F2E8A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:21:22.010726    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:22.010726    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:21:22.010726    1052 main.go:141] libmachine: Writing SSH key tar header
	I0603 04:21:22.020863    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:21:25.208511    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:25.208896    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:25.208950    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd' -SizeBytes 20000MB
	I0603 04:21:27.803631    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:27.804228    1052 main.go:141] libmachine: [stderr =====>] : 
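The "Writing magic tar header" / "Writing SSH key tar header" lines, together with the fixed-then-dynamic VHD dance above, are how the driver seeds credentials into a blank disk: a small tar stream holding the machine's public key is written at the front of the image, where the boot2docker ISO looks on first boot before formatting the rest of the disk. A hedged sketch; the entry name below is illustrative, not necessarily the exact path the driver writes:

    package main

    import (
    	"archive/tar"
    	"log"
    	"os"
    )

    func main() {
    	// Overwrite the start of the raw disk image with a tar stream
    	// containing the machine's public key; the guest picks it up on
    	// first boot. File and entry names here are assumptions.
    	f, err := os.OpenFile("disk.vhd", os.O_WRONLY, 0o644)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	key, err := os.ReadFile("id_rsa.pub")
    	if err != nil {
    		log.Fatal(err)
    	}
    	tw := tar.NewWriter(f)
    	if err := tw.WriteHeader(&tar.Header{
    		Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key)),
    	}); err != nil {
    		log.Fatal(err)
    	}
    	if _, err := tw.Write(key); err != nil {
    		log.Fatal(err)
    	}
    	if err := tw.Close(); err != nil {
    		log.Fatal(err)
    	}
    }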
	I0603 04:21:27.804228    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:21:31.450856    1052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-528700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:21:31.450856    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:31.451417    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700-m02 -DynamicMemoryEnabled $false
	I0603 04:21:33.696630    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:33.696630    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:33.697631    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700-m02 -Count 2
	I0603 04:21:35.876949    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:35.878150    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:35.878150    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\boot2docker.iso'
	I0603 04:21:38.473611    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:38.474555    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:38.474817    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\disk.vhd'
	I0603 04:21:41.148156    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:41.148533    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:41.148533    1052 main.go:141] libmachine: Starting VM...
	I0603 04:21:41.148533    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700-m02
	I0603 04:21:44.245171    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:44.245318    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:44.245318    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:21:44.245318    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:46.569829    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:46.570516    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:46.570516    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:49.117156    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:49.117156    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:50.129203    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:52.370523    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:52.371347    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:52.371347    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:21:54.941455    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:21:54.941455    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:55.954188    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:21:58.285621    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:00.822515    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:22:00.822515    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:01.831514    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:04.082153    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:04.083051    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:04.083149    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:06.655011    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:22:06.655011    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:07.669479    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:09.931311    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:09.932201    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:09.932201    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:12.538757    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:12.538757    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:12.539127    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:14.713008    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:14.713008    1052 main.go:141] libmachine: [stderr =====>] : 
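The loop above is the Hyper-V driver's wait-for-boot pattern: it alternates between querying the VM state and reading the first address of the first network adapter through PowerShell, sleeping roughly a second between rounds, until Get-VM reports an IP (172.17.84.187 after about 28 seconds here). A minimal Go sketch of that pattern, with invented helper names (psOutput, waitForIP) rather than minikube's actual functions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs one PowerShell expression and returns trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe",
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP mirrors the loop in the log: check the VM state, then the
// first address of the first adapter, sleeping between rounds.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err == nil && state == "Running" {
			ip, _ := psOutput(fmt.Sprintf(
				`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				return ip, nil // e.g. 172.17.84.187 above
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}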
	I0603 04:22:14.713151    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:22:14.713215    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:16.917779    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:19.509033    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:19.509407    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:19.515272    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:19.526105    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:19.526105    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:22:19.656578    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:22:19.656690    1052 buildroot.go:166] provisioning hostname "ha-528700-m02"
	I0603 04:22:19.656690    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:21.759586    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:21.760296    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:21.760296    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:24.319535    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:24.319535    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:24.324108    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:24.325113    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:24.325113    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700-m02 && echo "ha-528700-m02" | sudo tee /etc/hostname
	I0603 04:22:24.484271    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700-m02
	
	I0603 04:22:24.484271    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:26.652414    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:29.183689    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:29.183689    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:29.190393    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:29.190393    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:29.190920    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:22:29.340169    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 04:22:29.340169    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:22:29.340169    1052 buildroot.go:174] setting up certificates
	I0603 04:22:29.340169    1052 provision.go:84] configureAuth start
	I0603 04:22:29.340169    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:31.458611    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:31.458611    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:31.459708    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:34.031745    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:34.032233    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:34.032284    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:36.179903    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:38.700067    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:38.700067    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:38.700067    1052 provision.go:143] copyHostCerts
	I0603 04:22:38.700714    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:22:38.700766    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:22:38.700766    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:22:38.701513    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:22:38.702287    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:22:38.702932    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:22:38.702932    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:22:38.702932    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:22:38.704493    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:22:38.705036    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:22:38.705138    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:22:38.705355    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:22:38.706197    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700-m02 san=[127.0.0.1 172.17.84.187 ha-528700-m02 localhost minikube]
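The provision.go:117 step above signs a Docker TLS server certificate against the local minikube CA, with the SAN list covering loopback, the VM's IP, its hostname, and the generic names localhost/minikube. A condensed sketch of that kind of SAN-splitting issuance using Go's crypto/x509 (newServerCert is an invented name; minikube's real helper differs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server cert signed by the given CA, sorting
// each SAN into either IPAddresses or DNSNames as appropriate.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
	org string, sans []string) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SAN list into IPs and DNS names, as in
	// san=[127.0.0.1 172.17.84.187 ha-528700-m02 localhost minikube].
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}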
	I0603 04:22:38.829534    1052 provision.go:177] copyRemoteCerts
	I0603 04:22:38.843505    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:22:38.843505    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:40.994944    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:43.575641    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:43.575641    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:43.575641    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:22:43.682390    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8388743s)
	I0603 04:22:43.682390    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:22:43.683420    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:22:43.733638    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:22:43.733638    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 04:22:43.783124    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:22:43.783436    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:22:43.829357    1052 provision.go:87] duration metric: took 14.489156s to configureAuth
	I0603 04:22:43.829357    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:22:43.830175    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:22:43.830384    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:45.950821    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:45.950821    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:45.950923    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:48.506933    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:48.506933    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:48.516645    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:48.516645    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:48.516645    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:22:48.650635    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:22:48.650635    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:22:48.650635    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:22:48.650635    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:50.906336    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:50.906336    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:50.907076    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:53.547647    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:53.547647    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:53.553609    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:53.554186    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:53.554186    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.88.175"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:22:53.709095    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.88.175
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:22:53.709177    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:55.834332    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:22:58.416589    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:22:58.416716    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:22:58.421156    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:22:58.421822    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:22:58.421898    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:23:00.536633    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
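The `diff -u ... || { mv ...; systemctl ...; }` command above is an idempotent-update idiom: diff exits 0 when the installed unit already matches the newly rendered one, so the move/daemon-reload/enable/restart branch only runs when the file changed or does not exist. The "can't stat ... No such file or directory" output here just means this is the node's first boot, so the unit is installed fresh and Docker is enabled and started (systemd reports the /usr/lib path because /lib is merged into /usr/lib on the Buildroot guest).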
	
	I0603 04:23:00.536736    1052 machine.go:97] duration metric: took 45.8234333s to provisionDockerMachine
	I0603 04:23:00.536736    1052 client.go:171] duration metric: took 1m56.82792s to LocalClient.Create
	I0603 04:23:00.536785    1052 start.go:167] duration metric: took 1m56.82792s to libmachine.API.Create "ha-528700"
	I0603 04:23:00.536785    1052 start.go:293] postStartSetup for "ha-528700-m02" (driver="hyperv")
	I0603 04:23:00.536785    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:23:00.549647    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:23:00.549647    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:02.688564    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:02.688786    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:02.688786    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:05.242778    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:05.243758    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:05.243878    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:05.351258    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8015124s)
	I0603 04:23:05.363866    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:23:05.371523    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:23:05.371670    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:23:05.372104    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:23:05.373199    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:23:05.373277    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:23:05.385605    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:23:05.405236    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:23:05.458059    1052 start.go:296] duration metric: took 4.9212631s for postStartSetup
	I0603 04:23:05.460752    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:07.662155    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:07.662155    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:07.662239    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:10.248343    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:10.248638    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:10.248856    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:23:10.251285    1052 start.go:128] duration metric: took 2m6.5452242s to createHost
	I0603 04:23:10.251285    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:12.432213    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:12.432213    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:12.432478    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:15.006943    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:15.007135    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:15.012460    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:23:15.012988    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:23:15.012988    1052 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 04:23:15.156552    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717413795.158364662
	
	I0603 04:23:15.156552    1052 fix.go:216] guest clock: 1717413795.158364662
	I0603 04:23:15.156552    1052 fix.go:229] Guest: 2024-06-03 04:23:15.158364662 -0700 PDT Remote: 2024-06-03 04:23:10.2512854 -0700 PDT m=+336.056584301 (delta=4.907079262s)
	I0603 04:23:15.156685    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:17.333275    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:17.333703    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:17.333703    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:19.867341    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:19.867341    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:19.873377    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:23:19.873924    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.84.187 22 <nil> <nil>}
	I0603 04:23:19.873991    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717413795
	I0603 04:23:20.016547    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:23:15 UTC 2024
	
	I0603 04:23:20.016547    1052 fix.go:236] clock set: Mon Jun  3 11:23:15 UTC 2024
	 (err=<nil>)
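The `date +%!s(MISSING).%!N(MISSING)` a few lines up is a quirk of minikube's own log formatting, not the command that ran: Go's fmt renders %!s(MISSING) when a format verb has no matching argument, and the underlying SSH command is `date +%s.%N`. fix.go then compares that guest epoch to the host clock and, seeing a ~4.9s drift, pushes the host time into the guest with `sudo date -s @<epoch>`. A sketch of the step (runSSH is an assumed helper that executes a command on the VM and returns its stdout):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock reads the guest epoch over SSH and, if it has drifted
// from the host clock, resets it from the host.
func syncGuestClock(runSSH func(cmd string) (string, error)) error {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := time.Since(time.Unix(int64(secs), 0))
	if drift < 0 {
		drift = -drift
	}
	// Threshold is illustrative; the log above shows a ~4.9s delta
	// being corrected. minikube's actual cutoff may differ.
	if drift > 2*time.Second {
		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}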
	I0603 04:23:20.016547    1052 start.go:83] releasing machines lock for "ha-528700-m02", held for 2m16.3116814s
	I0603 04:23:20.016547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:22.199602    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:24.788585    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:24.788585    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:24.791728    1052 out.go:177] * Found network options:
	I0603 04:23:24.795154    1052 out.go:177]   - NO_PROXY=172.17.88.175
	W0603 04:23:24.797580    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:23:24.799220    1052 out.go:177]   - NO_PROXY=172.17.88.175
	W0603 04:23:24.801828    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:23:24.803582    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:23:24.805999    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:23:24.805999    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:24.815037    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 04:23:24.815037    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:23:27.038996    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:27.038996    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:27.039082    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:27.071434    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:27.071936    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:27.072005    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:29.712378    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:29.712378    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:29.712709    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:29.740126    1052 main.go:141] libmachine: [stdout =====>] : 172.17.84.187
	
	I0603 04:23:29.740126    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:29.740126    1052 sshutil.go:53] new ssh client: &{IP:172.17.84.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m02\id_rsa Username:docker}
	I0603 04:23:29.814344    1052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9992962s)
	W0603 04:23:29.814344    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:23:29.827181    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:23:29.905751    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 04:23:29.905751    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:23:29.905751    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0997407s)
	I0603 04:23:29.905751    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:23:29.956726    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:23:29.988815    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:23:30.013885    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:23:30.026153    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:23:30.060446    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:23:30.092896    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:23:30.126480    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:23:30.158496    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:23:30.190313    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:23:30.224287    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:23:30.257590    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 04:23:30.289268    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:23:30.319205    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:23:30.350788    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:30.539554    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 04:23:30.571926    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:23:30.583707    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:23:30.621024    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:23:30.653504    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:23:30.696536    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:23:30.733899    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:23:30.772146    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:23:30.833773    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:23:30.862091    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:23:30.908631    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:23:30.928820    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:23:30.948161    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:23:30.994484    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:23:31.190604    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:23:31.375884    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:23:31.375884    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:23:31.423000    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:31.619370    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:23:34.132804    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5133434s)
	I0603 04:23:34.144327    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:23:34.179600    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:23:34.213277    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:23:34.407633    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:23:34.612074    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:34.801650    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:23:34.840818    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:23:34.876154    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:35.063807    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:23:35.164501    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:23:35.176848    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:23:35.188170    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:23:35.199333    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:23:35.221406    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:23:35.278813    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:23:35.288496    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:23:35.330584    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:23:35.371338    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:23:35.374913    1052 out.go:177]   - env NO_PROXY=172.17.88.175
	I0603 04:23:35.378507    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:23:35.382539    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:23:35.384440    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:23:35.384440    1052 ip.go:210] interface addr: 172.17.80.1/20
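The ip.go lines above resolve the host-side address of the Hyper-V Default Switch (172.17.80.1/20) by scanning interfaces for the "vEthernet (Default Switch)" name and skipping the fe80:: link-local entry; that address is what gets written into the guest's /etc/hosts as host.minikube.internal just below. A sketch of the lookup (ipForInterface is an invented name):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address of the first interface
// whose name matches the given prefix.
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				if v4 := ipnet.IP.To4(); v4 != nil {
					return v4, nil // skips the fe80:: link-local addr
				}
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}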
	I0603 04:23:35.398131    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:23:35.402833    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:23:35.424663    1052 mustload.go:65] Loading cluster: ha-528700
	I0603 04:23:35.425417    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:23:35.425625    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:37.546041    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:37.546154    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:37.546154    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:23:37.546897    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.84.187
	I0603 04:23:37.546970    1052 certs.go:194] generating shared ca certs ...
	I0603 04:23:37.546970    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.547582    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:23:37.547985    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:23:37.548172    1052 certs.go:256] generating profile certs ...
	I0603 04:23:37.548865    1052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:23:37.548987    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff
	I0603 04:23:37.549130    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.84.187 172.17.95.254]
	I0603 04:23:37.753770    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff ...
	I0603 04:23:37.753770    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff: {Name:mk7956f77c939d9937df83e7fa7d3795b88314ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.755436    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff ...
	I0603 04:23:37.755436    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff: {Name:mk1c2e06615cac10354428838aeefade4c6ae3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:23:37.756609    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.6d76b5ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:23:37.770630    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.6d76b5ff -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
	I0603 04:23:37.772249    1052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:23:37.772313    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:23:37.772550    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:23:37.772600    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:23:37.773307    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:23:37.773462    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:23:37.774023    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:23:37.774023    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:23:37.774023    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:23:37.774739    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:23:37.775048    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:23:37.775312    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:23:37.775312    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:23:37.775849    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:23:37.775994    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:37.776193    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:23:37.776404    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:39.926458    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:39.926746    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:39.926746    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:42.561361    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:23:42.561541    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:42.561541    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:23:42.656467    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 04:23:42.664385    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 04:23:42.695582    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 04:23:42.702073    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 04:23:42.733607    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 04:23:42.740981    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 04:23:42.773024    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 04:23:42.780044    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 04:23:42.810423    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 04:23:42.818076    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 04:23:42.847598    1052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 04:23:42.853543    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 04:23:42.873995    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:23:42.922924    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:23:42.973724    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:23:43.036412    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:23:43.083611    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 04:23:43.128079    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 04:23:43.170362    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:23:43.221562    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:23:43.267447    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:23:43.314607    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:23:43.362209    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:23:43.407048    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 04:23:43.437230    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 04:23:43.466682    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 04:23:43.497357    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 04:23:43.533208    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 04:23:43.570076    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 04:23:43.601918    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 04:23:43.646926    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:23:43.664982    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:23:43.694410    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.701338    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.712144    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:23:43.731615    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 04:23:43.762929    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:23:43.794574    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.800379    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.811710    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:23:43.832694    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 04:23:43.862751    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:23:43.897551    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.904553    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.916107    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:23:43.936655    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 04:23:43.968390    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:23:43.974986    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:23:43.975292    1052 kubeadm.go:928] updating node {m02 172.17.84.187 8443 v1.30.1 docker true true} ...
	I0603 04:23:43.975407    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.84.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
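Note how the generated kubelet drop-in pins the joining node's identity: --hostname-override=ha-528700-m02 and --node-ip=172.17.84.187 match the name and the Hyper-V-assigned address of this specific VM, while the shared cluster settings (API-server VIP 172.17.95.254, service CIDR 10.96.0.0/12) come from the profile config echoed above.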
	I0603 04:23:43.975544    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:23:43.987344    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:23:44.015963    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:23:44.016061    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
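
The manifest above is the static pod minikube drops into /etc/kubernetes/manifests so kube-vip can float the control-plane VIP 172.17.95.254 between nodes: ARP advertisement (vip_arp), leader election over the plndr-cp-lock lease, and, since lb_enable was auto-enabled two lines earlier, load-balancing of port 8443 across the control planes. A rough sketch of rendering such a manifest from a Go text/template; the template string and the vipParams struct are illustrative assumptions, not minikube's actual kube-vip.go code:

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams carries the values that vary per cluster in the manifest
    // above; the struct is hypothetical, not minikube's internal type.
    type vipParams struct {
        VIP       string // APIServerHAVIP from the cluster config
        Port      string
        Interface string
        Image     string
    }

    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: vip_interface, value: {{.Interface}}}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
        - {name: lb_port, value: "{{.Port}}"}
        - {name: address, value: {{.VIP}}}
        image: {{.Image}}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        _ = t.Execute(os.Stdout, vipParams{
            VIP:       "172.17.95.254",
            Port:      "8443",
            Interface: "eth0",
            Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
        })
    }
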
	I0603 04:23:44.028838    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:23:44.042814    1052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 04:23:44.056793    1052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 04:23:44.079089    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0603 04:23:44.079229    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0603 04:23:44.079229    1052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0603 04:23:45.069699    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:23:45.080414    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:23:45.091145    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 04:23:45.091145    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 04:23:46.470744    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:23:46.482388    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:23:46.490349    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 04:23:46.490349    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 04:23:48.058566    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:23:48.082889    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:23:48.095834    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:23:48.101881    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 04:23:48.102041    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
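
Since /var/lib/minikube/binaries/v1.30.1 is empty on the fresh m02 VM, the three k8s binaries are fetched from dl.k8s.io with a checksum=file:...sha256 query, meaning the published .sha256 file validates each download before it is cached on the Windows host and scp'd into the VM. A minimal sketch of that verification step, assuming the .sha256 file holds just the hex digest; verifySHA256 is an illustrative helper, not minikube's download package:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verifySHA256 checks a downloaded binary against the digest published
    // in its companion .sha256 file, mirroring the checksum=file:... intent
    // of the download URLs above.
    func verifySHA256(binPath, sumPath string) error {
        want, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", binPath, got)
        }
        return nil
    }

    func main() {
        if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
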
	I0603 04:23:48.816308    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 04:23:48.834421    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 04:23:48.865713    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:23:48.899076    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0603 04:23:48.948293    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:23:48.955053    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
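
The grep/rewrite pair above pins control-plane.minikube.internal to the HA VIP in the guest's /etc/hosts: any stale line for that name is filtered out and the fresh mapping appended, via a temp file copied back with sudo. The same idempotent update sketched in Go; pinHost is an illustrative helper, not minikube code:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip,
    // the same effect as the grep -v / echo / cp pipeline in the log.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "172.17.95.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
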
	I0603 04:23:48.991542    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:23:49.213854    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:23:49.242043    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:23:49.243172    1052 start.go:316] joinCluster: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:23:49.243172    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 04:23:49.243172    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:23:51.390078    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:23:51.390078    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:51.390647    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:23:53.955907    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:23:53.956183    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:23:53.956354    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:23:54.156917    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9137341s)
	I0603 04:23:54.156917    1052 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:23:54.156917    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token miu2l8.dnnfyajibxax5wet --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m02 --control-plane --apiserver-advertise-address=172.17.84.187 --apiserver-bind-port=8443"
	I0603 04:24:37.482415    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token miu2l8.dnnfyajibxax5wet --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m02 --control-plane --apiserver-advertise-address=172.17.84.187 --apiserver-bind-port=8443": (43.3254022s)
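
The join itself is two steps and dominates this phase (43.3s): a join command is minted on the existing control plane with kubeadm token create --print-join-command --ttl=0, then replayed on m02 with --control-plane and the node's advertise address so m02 comes up as a second API server rather than a worker. A stripped-down sketch of that flow (the preflight and CRI-socket flags from the log are omitted); runOn stands in for minikube's SSH runner and simply executes locally:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runOn is a stand-in for an SSH runner: here it just executes locally.
    // In minikube this role is played by ssh_runner against each VM.
    func runOn(host, command string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func joinControlPlane(primary, newNode, nodeName, advertiseIP string) error {
        // Step 1: mint a non-expiring join token on the existing control plane.
        joinCmd, err := runOn(primary, "sudo kubeadm token create --print-join-command --ttl=0")
        if err != nil {
            return fmt.Errorf("token create: %w", err)
        }
        // Step 2: replay it on the new node, promoting it to a control plane.
        full := fmt.Sprintf("sudo %s --control-plane --node-name=%s --apiserver-advertise-address=%s --apiserver-bind-port=8443",
            joinCmd, nodeName, advertiseIP)
        _, err = runOn(newNode, full)
        return err
    }

    func main() {
        if err := joinControlPlane("ha-528700", "ha-528700-m02", "ha-528700-m02", "172.17.84.187"); err != nil {
            fmt.Println(err)
        }
    }
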
	I0603 04:24:37.482630    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 04:24:38.401424    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700-m02 minikube.k8s.io/updated_at=2024_06_03T04_24_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=false
	I0603 04:24:38.609334    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-528700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 04:24:38.777326    1052 start.go:318] duration metric: took 49.5340442s to joinCluster
	I0603 04:24:38.777440    1052 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:24:38.777669    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:24:38.780243    1052 out.go:177] * Verifying Kubernetes components...
	I0603 04:24:38.795995    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:24:39.169463    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:24:39.203436    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:24:39.204433    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 04:24:39.204433    1052 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.88.175:8443
	I0603 04:24:39.205457    1052 node_ready.go:35] waiting up to 6m0s for node "ha-528700-m02" to be "Ready" ...
	I0603 04:24:39.205457    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:39.205457    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:39.205457    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:39.205457    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:39.223584    1052 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 04:24:39.712098    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:39.712159    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:39.712159    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:39.712159    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:39.894767    1052 round_trippers.go:574] Response Status: 200 OK in 182 milliseconds
	I0603 04:24:40.218307    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:40.218366    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:40.218366    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:40.218366    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:40.243779    1052 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0603 04:24:40.712507    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:40.712507    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:40.712567    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:40.712567    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:40.719565    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:41.206348    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:41.206559    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:41.206559    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:41.206559    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:41.212401    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:41.213841    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:41.712430    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:41.712527    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:41.712621    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:41.712621    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:41.718764    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:42.219688    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:42.219779    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:42.219779    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:42.219779    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:42.296032    1052 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I0603 04:24:42.710488    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:42.710545    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:42.710545    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:42.710545    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:42.715772    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:43.211379    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:43.211548    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:43.211548    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:43.211548    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:43.217231    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:43.217533    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:43.706729    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:43.706791    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:43.706858    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:43.706858    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:43.739456    1052 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0603 04:24:44.212149    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:44.212349    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:44.212349    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:44.212349    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:44.216933    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:44.719741    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:44.720017    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:44.720017    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:44.720105    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:44.729354    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:45.211264    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:45.211462    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:45.211462    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:45.211462    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:45.218568    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:45.219816    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:45.719803    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:45.720112    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:45.720112    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:45.720112    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:45.724843    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:46.210192    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:46.210192    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:46.210192    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:46.210192    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:46.216314    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:46.718524    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:46.718524    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:46.718591    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:46.718591    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:46.724983    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:47.207739    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:47.207898    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:47.207898    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:47.207953    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:47.212291    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:47.713852    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:47.713852    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:47.713967    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:47.713967    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:47.852991    1052 round_trippers.go:574] Response Status: 200 OK in 139 milliseconds
	I0603 04:24:47.853972    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:48.210290    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:48.210558    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:48.210558    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:48.210558    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:48.255749    1052 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0603 04:24:48.714975    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:48.715050    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:48.715050    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:48.715050    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:48.720401    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:49.219838    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:49.219903    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:49.219903    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:49.219903    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:49.225087    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:49.709076    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:49.709076    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:49.709076    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:49.709387    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:49.713855    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.210703    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.210703    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.210778    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.210778    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.217052    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:50.218098    1052 node_ready.go:53] node "ha-528700-m02" has status "Ready":"False"
	I0603 04:24:50.711613    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.711745    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.711745    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.711745    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.716075    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.717742    1052 node_ready.go:49] node "ha-528700-m02" has status "Ready":"True"
	I0603 04:24:50.717841    1052 node_ready.go:38] duration metric: took 11.5122594s for node "ha-528700-m02" to be "Ready" ...
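
The burst of GETs above is node_ready.go polling /api/v1/nodes/ha-528700-m02 roughly every 500ms until the Ready condition flipped to True, which took 11.5s here. The same wait written directly against client-go; a sketch assuming a kubeconfig at the default location, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // the same loop the round_trippers lines above record one GET at a time.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        fmt.Println(waitNodeReady(cs, "ha-528700-m02", 6*time.Minute))
    }
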
	I0603 04:24:50.717841    1052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:24:50.717970    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:50.717970    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.717970    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.717970    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.725710    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:50.735090    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.735090    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6tv8
	I0603 04:24:50.735090    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.735090    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.735090    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.739834    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.740525    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.740525    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.740525    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.740525    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.744847    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.745203    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.745890    1052 pod_ready.go:81] duration metric: took 10.7999ms for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.745890    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.745890    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qwkq9
	I0603 04:24:50.745890    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.746040    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.746040    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.748979    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:50.750212    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.750212    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.750212    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.750270    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.753063    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:50.753929    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.753929    1052 pod_ready.go:81] duration metric: took 8.0385ms for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.753929    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.753929    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700
	I0603 04:24:50.753929    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.753929    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.753929    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.759125    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:50.759125    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:50.759125    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.759125    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.759125    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.764165    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:50.764312    1052 pod_ready.go:92] pod "etcd-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:50.764312    1052 pod_ready.go:81] duration metric: took 10.3831ms for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.764312    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:50.764938    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:50.764938    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.764938    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.764938    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.769017    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:50.769622    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:50.769622    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:50.769622    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:50.769622    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:50.773194    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:51.271082    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:51.271082    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.271082    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.271082    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.276564    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:51.278160    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:51.278816    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.278816    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.279031    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.288074    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:51.770059    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:51.770289    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.770289    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.770289    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.775665    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:51.776601    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:51.776663    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:51.776663    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:51.776663    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:51.781326    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:52.276725    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:52.276725    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.276725    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.276725    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.281415    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:52.283136    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:52.283136    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.283206    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.283206    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.287252    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:52.777560    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:52.777560    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.777647    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.777647    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.783175    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:52.784827    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:52.784827    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:52.784827    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:52.784827    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:52.789435    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:52.790185    1052 pod_ready.go:102] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 04:24:53.276153    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:53.276153    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.276153    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.276153    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.281836    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:53.282998    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:53.282998    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.282998    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.282998    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.287601    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:53.777593    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:53.777843    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.777843    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.777843    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.782899    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:53.784561    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:53.784561    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:53.784561    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:53.784561    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:53.787944    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:54.265258    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:54.265258    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.265258    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.265258    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.270039    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:54.271747    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:54.271747    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.271747    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.271850    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.276122    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:54.769547    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:54.769547    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.769547    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.769547    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.777168    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:54.777985    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:54.777985    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:54.777985    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:54.777985    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:54.782170    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.279065    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:24:55.279065    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.279135    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.279135    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.286649    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:24:55.287564    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.287564    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.287564    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.287564    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.292299    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.293483    1052 pod_ready.go:92] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.293483    1052 pod_ready.go:81] duration metric: took 4.5291613s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.293483    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.293483    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700
	I0603 04:24:55.293483    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.293483    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.293483    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.298154    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:55.299819    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.299849    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.299849    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.299912    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.303136    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:55.305004    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.305123    1052 pod_ready.go:81] duration metric: took 11.6396ms for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.305123    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.305250    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:24:55.305250    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.305332    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.305332    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.309098    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:24:55.310127    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.310227    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.310227    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.310227    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.312906    1052 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 04:24:55.312906    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.312906    1052 pod_ready.go:81] duration metric: took 7.7826ms for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.312906    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.312906    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:24:55.312906    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.312906    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.312906    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.322012    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:55.512106    1052 request.go:629] Waited for 188.3559ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.512375    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:55.512375    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.512375    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.512375    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.518168    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:55.519461    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.519573    1052 pod_ready.go:81] duration metric: took 206.6666ms for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
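
The request.go:629 "Waited for ... due to client-side throttling" messages that start appearing here are client-go's own rate limiter at work, as the message itself notes (not server-side API priority and fairness): rest.Config defaults to QPS 5 with burst 10, so the rapid pod-then-node GET pairs in this phase get queued for ~200ms each. When a tool legitimately needs chattier access, the limits are raised on the rest.Config before building the clientset; a short sketch (the values are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; the log above shows requests queuing
        // behind exactly this limiter. Raising both trades API-server load
        // for lower latency on bursty polling like these readiness checks.
        config.QPS = 50
        config.Burst = 100
        _ = kubernetes.NewForConfigOrDie(config)
    }
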
	I0603 04:24:55.519573    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.714406    1052 request.go:629] Waited for 194.7003ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:24:55.714406    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:24:55.714406    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.714406    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.714406    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.719741    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:55.917254    1052 request.go:629] Waited for 195.8712ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.917254    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:55.917633    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:55.917984    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:55.918481    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:55.928022    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:55.928022    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:55.928970    1052 pod_ready.go:81] duration metric: took 409.3967ms for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:55.929023    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.119396    1052 request.go:629] Waited for 189.9588ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:24:56.119516    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:24:56.119516    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.119516    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.119516    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.125989    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:24:56.322382    1052 request.go:629] Waited for 194.6562ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:56.322584    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:56.322615    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.322615    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.322615    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.327126    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:56.328103    1052 pod_ready.go:92] pod "kube-proxy-dbr56" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:56.328103    1052 pod_ready.go:81] duration metric: took 399.0796ms for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.328103    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.525428    1052 request.go:629] Waited for 196.9841ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:24:56.525428    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:24:56.525678    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.525678    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.525678    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.532173    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:56.712729    1052 request.go:629] Waited for 179.216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:56.712927    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:56.712983    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.712983    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.712983    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.721677    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:24:56.722675    1052 pod_ready.go:92] pod "kube-proxy-wlzrp" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:56.722675    1052 pod_ready.go:81] duration metric: took 394.5709ms for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.722675    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:56.913145    1052 request.go:629] Waited for 190.2603ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:24:56.913326    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:24:56.913326    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:56.913326    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:56.913326    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:56.919034    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.114427    1052 request.go:629] Waited for 194.0331ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:57.114622    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:24:57.114689    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.114689    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.114689    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.120271    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.121233    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:57.121233    1052 pod_ready.go:81] duration metric: took 398.5576ms for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.121335    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.315971    1052 request.go:629] Waited for 194.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:24:57.315971    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:24:57.316272    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.316323    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.316323    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.321078    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:24:57.518570    1052 request.go:629] Waited for 196.3351ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:57.518942    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:24:57.518942    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.518942    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.518942    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.524963    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.525494    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:24:57.525494    1052 pod_ready.go:81] duration metric: took 404.1578ms for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:24:57.525494    1052 pod_ready.go:38] duration metric: took 6.8075086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
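
The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter (QPS 5, burst 10 by default), not from the apiserver. A minimal sketch of the readiness poll the pod_ready.go lines describe, assuming a configured *kubernetes.Clientset named cs (helper names here are illustrative, not minikube's actual functions):

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its PodReady condition is True,
    // mirroring the "waiting up to 6m0s for pod ... to be Ready" lines.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient errors: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
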
	I0603 04:24:57.525741    1052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:24:57.539145    1052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:24:57.565553    1052 api_server.go:72] duration metric: took 18.7878977s to wait for apiserver process to appear ...
	I0603 04:24:57.565553    1052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:24:57.565553    1052 api_server.go:253] Checking apiserver healthz at https://172.17.88.175:8443/healthz ...
	I0603 04:24:57.575087    1052 api_server.go:279] https://172.17.88.175:8443/healthz returned 200:
	ok
	I0603 04:24:57.575660    1052 round_trippers.go:463] GET https://172.17.88.175:8443/version
	I0603 04:24:57.575660    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.575660    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.575660    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.576920    1052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:24:57.576920    1052 api_server.go:141] control plane version: v1.30.1
	I0603 04:24:57.576920    1052 api_server.go:131] duration metric: took 11.3668ms to wait for apiserver health ...
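
The healthz and version checks above are plain HTTPS GETs against the apiserver. A self-contained reproduction, assuming the cluster CA has been loaded into pool and with error handling elided for brevity:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    )

    // probe replays the /healthz and /version requests from the log.
    func probe(pool *x509.CertPool) {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool},
    	}}

    	resp, _ := client.Get("https://172.17.88.175:8443/healthz")
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"

    	resp, _ = client.Get("https://172.17.88.175:8443/version")
    	var v struct {
    		GitVersion string `json:"gitVersion"`
    	}
    	json.NewDecoder(resp.Body).Decode(&v)
    	resp.Body.Close()
    	fmt.Println("control plane version:", v.GitVersion) // "v1.30.1" above
    }
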
	I0603 04:24:57.576920    1052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:24:57.721481    1052 request.go:629] Waited for 144.3732ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:57.721573    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:57.721573    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.721573    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.721637    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.731365    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:24:57.739206    1052 system_pods.go:59] 17 kube-system pods found
	I0603 04:24:57.739245    1052 system_pods.go:61] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:24:57.739245    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:24:57.739391    1052 system_pods.go:61] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:24:57.739391    1052 system_pods.go:74] duration metric: took 162.4709ms to wait for pod list to return data ...
	I0603 04:24:57.739391    1052 default_sa.go:34] waiting for default service account to be created ...
	I0603 04:24:57.924223    1052 request.go:629] Waited for 184.4915ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:24:57.924223    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:24:57.924223    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:57.924223    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:57.924223    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:57.929970    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:24:57.930845    1052 default_sa.go:45] found service account: "default"
	I0603 04:24:57.930845    1052 default_sa.go:55] duration metric: took 191.4531ms for default service account to be created ...
	I0603 04:24:57.930845    1052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 04:24:58.125097    1052 request.go:629] Waited for 194.2511ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:58.125300    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:24:58.125300    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:58.125300    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:58.125371    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:58.135992    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:24:58.145094    1052 system_pods.go:86] 17 kube-system pods found
	I0603 04:24:58.145094    1052 system_pods.go:89] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:24:58.145094    1052 system_pods.go:89] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:24:58.145094    1052 system_pods.go:126] duration metric: took 214.2483ms to wait for k8s-apps to be running ...
	I0603 04:24:58.145094    1052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 04:24:58.154870    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:24:58.191970    1052 system_svc.go:56] duration metric: took 46.8764ms WaitForService to wait for kubelet
	I0603 04:24:58.191970    1052 kubeadm.go:576] duration metric: took 19.4143132s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:24:58.191970    1052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:24:58.316976    1052 request.go:629] Waited for 124.8185ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes
	I0603 04:24:58.317205    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes
	I0603 04:24:58.317205    1052 round_trippers.go:469] Request Headers:
	I0603 04:24:58.317205    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:24:58.317205    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:24:58.325544    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:24:58.327574    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:24:58.327734    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:24:58.327734    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:24:58.327734    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:24:58.327734    1052 node_conditions.go:105] duration metric: took 135.7632ms to run NodePressure ...
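
The two cpu/storage pairs above are Status.Capacity read from each node returned by GET /api/v1/nodes (one pair per node, ha-528700 and ha-528700-m02). Roughly, using the same assumed clientset and imports as the earlier sketch, plus fmt:

    // printNodeCapacity mirrors the node_conditions.go lines: one
    // ephemeral-storage/cpu pair per node in the cluster.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
    			n.Status.Capacity.StorageEphemeral().String(),
    			n.Status.Capacity.Cpu().String())
    	}
    	return nil
    }
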
	I0603 04:24:58.327734    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:24:58.327734    1052 start.go:254] writing updated cluster config ...
	I0603 04:24:58.331663    1052 out.go:177] 
	I0603 04:24:58.344561    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:24:58.344561    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:24:58.353970    1052 out.go:177] * Starting "ha-528700-m03" control-plane node in "ha-528700" cluster
	I0603 04:24:58.357125    1052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 04:24:58.357125    1052 cache.go:56] Caching tarball of preloaded images
	I0603 04:24:58.357842    1052 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 04:24:58.358182    1052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 04:24:58.358356    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:24:58.359578    1052 start.go:360] acquireMachinesLock for ha-528700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 04:24:58.360557    1052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-528700-m03"
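
The Spec dumped above (Name/Clock/Delay/Timeout/Cancel) has the shape of a github.com/juju/mutex/v2 spec, which appears to be what minikube uses for its cross-process machines lock. A hedged sketch under that assumption:

    package main

    import (
    	"time"

    	"github.com/juju/clock"
    	"github.com/juju/mutex/v2"
    )

    // lockMachines takes the cross-process lock with the same knobs as the
    // log line: retry every 500ms, give up after 13 minutes. The caller must
    // call Release() on the returned Releaser when machine creation is done.
    func lockMachines(name string) (mutex.Releaser, error) {
    	return mutex.Acquire(mutex.Spec{
    		Name:    name, // e.g. the "mk..." name from the log
    		Clock:   clock.WallClock,
    		Delay:   500 * time.Millisecond,
    		Timeout: 13 * time.Minute,
    	})
    }

Here the lock is acquired in 0s because no other minikube process holds it; contention would surface as repeated 500ms retries up to the timeout.
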
	I0603 04:24:58.360557    1052 start.go:93] Provisioning new machine with config: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:24:58.360557    1052 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0603 04:24:58.364188    1052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 04:24:58.365294    1052 start.go:159] libmachine.API.Create for "ha-528700" (driver="hyperv")
	I0603 04:24:58.365355    1052 client.go:168] LocalClient.Create starting
	I0603 04:24:58.365627    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 04:24:58.366133    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:24:58.366200    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:24:58.366457    1052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 04:24:58.366629    1052 main.go:141] libmachine: Decoding PEM data...
	I0603 04:24:58.366629    1052 main.go:141] libmachine: Parsing certificate...
	I0603 04:24:58.366629    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:00.295482    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 04:25:02.053302    1052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 04:25:02.053932    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:02.053984    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:25:03.546741    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:25:03.546741    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:03.546829    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:25:07.372584    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:25:07.372584    1052 main.go:141] libmachine: [stderr =====>] : 
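
Every "[executing ==>]" / "[stdout =====>]" pair above is the hyperv driver shelling out to powershell.exe and parsing stdout, here as JSON. A minimal equivalent of the switch query (simplified; the filtering and sorting from the log are omitted):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    // listSwitches runs Get-VMSwitch and decodes the ConvertTo-Json output;
    // the @(...) wrapper forces an array even for a single switch.
    func listSwitches() ([]vmSwitch, error) {
    	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive",
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
    	out, err := cmd.Output()
    	if err != nil {
    		return nil, fmt.Errorf("powershell: %w", err)
    	}
    	var switches []vmSwitch
    	err = json.Unmarshal(out, &switches)
    	return switches, err
    }
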
	I0603 04:25:07.374683    1052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 04:25:07.833384    1052 main.go:141] libmachine: Creating SSH key...
	I0603 04:25:08.057341    1052 main.go:141] libmachine: Creating VM...
	I0603 04:25:08.057341    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 04:25:11.021183    1052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 04:25:11.021183    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:11.021447    1052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 04:25:11.021529    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 04:25:12.818695    1052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 04:25:12.819032    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:12.819032    1052 main.go:141] libmachine: Creating VHD
	I0603 04:25:12.819155    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 04:25:16.660654    1052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4773904F-6D49-4129-8E2E-A2E8D56C24E4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 04:25:16.660654    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:16.660912    1052 main.go:141] libmachine: Writing magic tar header
	I0603 04:25:16.660912    1052 main.go:141] libmachine: Writing SSH key tar header
	I0603 04:25:16.671592    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:19.908955    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd' -SizeBytes 20000MB
	I0603 04:25:22.520085    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:22.520085    1052 main.go:141] libmachine: [stderr =====>] : 
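
The New-VHD -Fixed 10MB / "Writing magic tar header" / Convert-VHD / Resize-VHD dance above is how the driver smuggles the SSH key onto the disk before first boot: a fixed VHD is raw disk data followed by a 512-byte footer, so a tar stream written at offset 0 survives the conversion to a dynamic VHD, and the guest can detect and extract it on first boot. A hedged sketch of the tar-writing step (illustrative file name, not the driver's exact code):

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyIntoFixedVHD writes a tar stream at the start of a fixed-size
    // VHD. As long as the archive fits in the data region, the VHD footer at
    // the end of the file stays intact.
    func writeKeyIntoFixedVHD(vhdPath string, key []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f) // writes from offset 0
    	if err := tw.WriteHeader(&tar.Header{
    		Name: "id_rsa.pub", // illustrative entry name
    		Mode: 0644,
    		Size: int64(len(key)),
    	}); err != nil {
    		return err
    	}
    	if _, err := tw.Write(key); err != nil {
    		return err
    	}
    	return tw.Close() // pads to the tar block size; footer untouched
    }
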
	I0603 04:25:22.520779    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 04:25:26.314985    1052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-528700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 04:25:26.315306    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:26.315306    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-528700-m03 -DynamicMemoryEnabled $false
	I0603 04:25:28.649817    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:28.650564    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:28.650564    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-528700-m03 -Count 2
	I0603 04:25:30.868612    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:30.868612    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:30.868976    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\boot2docker.iso'
	I0603 04:25:33.519310    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:33.519396    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:33.519467    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-528700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\disk.vhd'
	I0603 04:25:36.219234    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:36.220156    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:36.220303    1052 main.go:141] libmachine: Starting VM...
	I0603 04:25:36.220374    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-528700-m03
	I0603 04:25:39.351010    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:39.351712    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:39.351774    1052 main.go:141] libmachine: Waiting for host to start...
	I0603 04:25:39.351836    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:41.721033    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:41.721033    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:41.721791    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:44.383469    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:44.383469    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:45.392893    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:47.698971    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:47.699283    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:47.699283    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:50.302813    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:50.302813    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:51.315684    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:53.688564    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:25:56.304146    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:25:56.304146    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:57.308097    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:25:59.590202    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:25:59.590202    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:25:59.590547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:02.172200    1052 main.go:141] libmachine: [stdout =====>] : 
	I0603 04:26:02.172200    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:03.186295    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:05.515620    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:08.134860    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:08.134860    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:08.134957    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:10.333035    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:10.333035    1052 main.go:141] libmachine: [stderr =====>] : 
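
The "Waiting for host to start..." cycle above alternates a VM-state query and an ipaddresses[0] query (each PowerShell round-trip costs a couple of seconds) plus a ~1s sleep, until the first adapter reports an address. In miniature:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs one PowerShell snippet and returns its trimmed stdout.
    func ps(script string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the VM is Running and its first adapter has an IP.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
    		state, _ := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
    		ip, _ := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    		if state == "Running" && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }
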
	I0603 04:26:10.333035    1052 machine.go:94] provisionDockerMachine start ...
	I0603 04:26:10.333761    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:12.559616    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:12.559671    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:12.559671    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:15.191495    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:15.191610    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:15.196285    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:15.208858    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:15.208858    1052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 04:26:15.324340    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 04:26:15.324340    1052 buildroot.go:166] provisioning hostname "ha-528700-m03"
	I0603 04:26:15.324340    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:17.487362    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:17.487362    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:17.487584    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:20.104567    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:20.105291    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:20.111791    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:20.111945    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:20.111945    1052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-528700-m03 && echo "ha-528700-m03" | sudo tee /etc/hostname
	I0603 04:26:20.261026    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-528700-m03
	
	I0603 04:26:20.261142    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:22.433608    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:22.433608    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:22.433916    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:25.067691    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:25.067691    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:25.077562    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:25.077562    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:25.078490    1052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-528700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-528700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-528700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 04:26:25.227854    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 04:26:25.227930    1052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 04:26:25.228006    1052 buildroot.go:174] setting up certificates
	I0603 04:26:25.228006    1052 provision.go:84] configureAuth start
	I0603 04:26:25.228082    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:27.408753    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:27.409023    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:27.409123    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:30.043379    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:30.043598    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:30.043598    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:32.202329    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:32.202394    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:32.202527    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:34.821170    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:34.821170    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:34.821170    1052 provision.go:143] copyHostCerts
	I0603 04:26:34.821170    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 04:26:34.821170    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 04:26:34.821695    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 04:26:34.821772    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 04:26:34.823149    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 04:26:34.823149    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 04:26:34.823149    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 04:26:34.823692    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 04:26:34.825107    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 04:26:34.825107    1052 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 04:26:34.825651    1052 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 04:26:34.825950    1052 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 04:26:34.826941    1052 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-528700-m03 san=[127.0.0.1 172.17.89.50 ha-528700-m03 localhost minikube]
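
The "generating server cert ... san=[...]" line above means minting a CA-signed serving certificate whose SANs cover every name and address the Docker daemon will be reached by. An illustrative crypto/x509 version of that step (not minikube's exact helper), using the SANs and the 26280h CertExpiration from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate signed by the given CA.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-528700-m03"}},
    		// The SANs from the log line above:
    		DNSNames:    []string{"ha-528700-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.89.50")},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(26280 * time.Hour),
    		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
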
	I0603 04:26:34.983621    1052 provision.go:177] copyRemoteCerts
	I0603 04:26:34.994021    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 04:26:34.994021    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:37.187853    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:39.767551    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:39.767551    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:39.767551    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:26:39.880778    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8867458s)
	I0603 04:26:39.880869    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 04:26:39.881092    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 04:26:39.929681    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 04:26:39.929681    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 04:26:39.985873    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 04:26:39.985873    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 04:26:40.032547    1052 provision.go:87] duration metric: took 14.804507s to configureAuth
	I0603 04:26:40.032547    1052 buildroot.go:189] setting minikube options for container-runtime
	I0603 04:26:40.032547    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:26:40.032547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:42.225109    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:42.225456    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:42.225456    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:44.807336    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:44.807336    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:44.812506    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:44.813222    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:44.813222    1052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 04:26:44.937734    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 04:26:44.937886    1052 buildroot.go:70] root file system type: tmpfs
	I0603 04:26:44.938148    1052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 04:26:44.938245    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:47.085858    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:47.086116    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:47.086116    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:49.665857    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:49.666489    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:49.671866    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:49.672587    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:49.672587    1052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.88.175"
	Environment="NO_PROXY=172.17.88.175,172.17.84.187"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 04:26:49.827464    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.88.175
	Environment=NO_PROXY=172.17.88.175,172.17.84.187
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 04:26:49.828005    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:51.995200    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:26:54.617676    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:26:54.617676    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:54.623471    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:26:54.623830    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:26:54.623830    1052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 04:26:56.842660    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
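
The one-liner above is an idempotent install: diff exits non-zero when docker.service differs from the rendered .new file (or, as here, doesn't exist yet), which triggers the mv/daemon-reload/enable/restart branch; on a re-run with identical content the whole block short-circuits and the service is left alone. Generalized, assuming an SSH runner r with a Run(cmd string) error method:

    // Only swap the unit in and bounce the service when the rendered file
    // actually changed; a no-op diff (exit 0) skips the install branch.
    unit := "/lib/systemd/system/docker.service"
    cmd := fmt.Sprintf(
    	"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
    		"sudo systemctl -f restart docker; }", unit)
    if err := r.Run(cmd); err != nil {
    	return err
    }
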
	I0603 04:26:56.842771    1052 machine.go:97] duration metric: took 46.5095178s to provisionDockerMachine
	I0603 04:26:56.842771    1052 client.go:171] duration metric: took 1m58.4771449s to LocalClient.Create
	I0603 04:26:56.842771    1052 start.go:167] duration metric: took 1m58.477206s to libmachine.API.Create "ha-528700"
	I0603 04:26:56.842956    1052 start.go:293] postStartSetup for "ha-528700-m03" (driver="hyperv")
	I0603 04:26:56.843019    1052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 04:26:56.855344    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 04:26:56.855344    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:26:59.008891    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:26:59.009395    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:26:59.009395    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:01.636014    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:01.636014    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:01.636626    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:01.749316    1052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8938673s)
	I0603 04:27:01.761289    1052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 04:27:01.767451    1052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 04:27:01.767451    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 04:27:01.768427    1052 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 04:27:01.769425    1052 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 04:27:01.769425    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 04:27:01.780727    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 04:27:01.801161    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 04:27:01.851491    1052 start.go:296] duration metric: took 5.008523s for postStartSetup
	I0603 04:27:01.854438    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:04.001967    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:04.001967    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:04.002547    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:06.641694    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:06.641776    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:06.642042    1052 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\config.json ...
	I0603 04:27:06.644607    1052 start.go:128] duration metric: took 2m8.2837565s to createHost
	I0603 04:27:06.644863    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:08.804466    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:08.804466    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:08.805263    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:11.409745    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:11.410423    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:11.415748    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:27:11.415748    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:27:11.415748    1052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 04:27:11.535658    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717414031.542400255
	
	I0603 04:27:11.535725    1052 fix.go:216] guest clock: 1717414031.542400255
	I0603 04:27:11.535782    1052 fix.go:229] Guest: 2024-06-03 04:27:11.542400255 -0700 PDT Remote: 2024-06-03 04:27:06.6446079 -0700 PDT m=+572.449375301 (delta=4.897792355s)
	I0603 04:27:11.535851    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:13.743131    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:13.743131    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:13.743439    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:16.370649    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:16.370649    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:16.378401    1052 main.go:141] libmachine: Using SSH client type: native
	I0603 04:27:16.379040    1052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.89.50 22 <nil> <nil>}
	I0603 04:27:16.379040    1052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717414031
	I0603 04:27:16.518862    1052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 11:27:11 UTC 2024
	
	I0603 04:27:16.518862    1052 fix.go:236] clock set: Mon Jun  3 11:27:11 UTC 2024
	 (err=<nil>)
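
The clock fix above works by reading the guest clock with "date +%s.%N", diffing it against the host's reference timestamp from createHost, and rewriting the guest clock over SSH when the drift is material (here ~4.9s). A minimal Go sketch of that delta computation, fed the values from this run; guestClockDelta is an illustrative name, and the correction command is simplified relative to minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest (as run over
// SSH above) and returns guest-minus-host. It assumes the fractional part
// is the full 9-digit nanosecond field that %N prints.
func guestClockDelta(dateOutput string, hostRef time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nanos int64
	if len(parts) == 2 {
		if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(secs, nanos).Sub(hostRef), nil
}

func main() {
	// Values from the run above: the guest read 1717414031.542400255 while
	// the host reference was 04:27:06.6446079 PDT.
	hostRef := time.Date(2024, 6, 3, 4, 27, 6, 644607900, time.FixedZone("PDT", -7*3600))
	delta, _ := guestClockDelta("1717414031.542400255", hostRef)
	fmt.Println(delta) // ≈4.897792355s, matching the logged delta
	if delta > time.Second || delta < -time.Second {
		// Simplified correction: pin the guest to whole-second host time.
		fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
	}
}
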
	I0603 04:27:16.518970    1052 start.go:83] releasing machines lock for "ha-528700-m03", held for 2m18.1579884s
	I0603 04:27:16.519119    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:18.677116    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:18.677116    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:18.677352    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:21.287359    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:21.287652    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:21.290006    1052 out.go:177] * Found network options:
	I0603 04:27:21.293529    1052 out.go:177]   - NO_PROXY=172.17.88.175,172.17.84.187
	W0603 04:27:21.296389    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.296389    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:27:21.298475    1052 out.go:177]   - NO_PROXY=172.17.88.175,172.17.84.187
	W0603 04:27:21.300784    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.300784    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.302498    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 04:27:21.302498    1052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 04:27:21.305291    1052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 04:27:21.305448    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:21.315861    1052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 04:27:21.316402    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:27:23.566075    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:23.566075    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:23.567123    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:23.568009    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:23.568009    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:23.568541    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:26.318556    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:26.318643    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:26.318801    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:26.342545    1052 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:27:26.342545    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:26.342545    1052 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:27:26.502930    1052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1975011s)
	I0603 04:27:26.502930    1052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1863895s)
	W0603 04:27:26.502930    1052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 04:27:26.515893    1052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 04:27:26.547630    1052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 04:27:26.547630    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:27:26.547708    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:27:26.599028    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 04:27:26.630327    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 04:27:26.651502    1052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 04:27:26.668149    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 04:27:26.700216    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:27:26.731813    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 04:27:26.761109    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 04:27:26.793612    1052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 04:27:26.826773    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 04:27:26.858380    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 04:27:26.889858    1052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
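
Taken together, the sed edits above perform one coherent rewrite of /etc/containerd/config.toml: pin the pause image, force the runc v2 runtime, point CNI at /etc/cni/net.d, re-enable unprivileged ports, and select cgroupfs by setting SystemdCgroup to false. A plausible resulting fragment, reconstructed from the commands themselves rather than copied from the VM (stock surrounding fields omitted):

[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
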
	I0603 04:27:26.923041    1052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 04:27:26.953033    1052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 04:27:26.992400    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:27.189127    1052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 04:27:27.220236    1052 start.go:494] detecting cgroup driver to use...
	I0603 04:27:27.232156    1052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 04:27:27.271943    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:27:27.306217    1052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 04:27:27.357485    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 04:27:27.391398    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:27:27.441707    1052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 04:27:27.504317    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 04:27:27.530077    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 04:27:27.581211    1052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 04:27:27.597685    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 04:27:27.616198    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 04:27:27.657648    1052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 04:27:27.860622    1052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 04:27:28.051541    1052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 04:27:28.051641    1052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 04:27:28.100658    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:28.309198    1052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 04:27:30.837960    1052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5287558s)
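
The 130-byte /etc/docker/daemon.json written from memory above puts Docker on the same cgroupfs driver. The log does not show the payload; a representative daemon.json, assuming minikube's usual defaults, might look like:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
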
	I0603 04:27:30.851317    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 04:27:30.888038    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:27:30.924182    1052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 04:27:31.126388    1052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 04:27:31.336754    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:31.546453    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 04:27:31.589730    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 04:27:31.626258    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:31.835733    1052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 04:27:31.954473    1052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 04:27:31.968963    1052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 04:27:31.977303    1052 start.go:562] Will wait 60s for crictl version
	I0603 04:27:31.989293    1052 ssh_runner.go:195] Run: which crictl
	I0603 04:27:32.006594    1052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 04:27:32.060331    1052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 04:27:32.068818    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:27:32.109869    1052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 04:27:32.141370    1052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 04:27:32.145230    1052 out.go:177]   - env NO_PROXY=172.17.88.175
	I0603 04:27:32.149136    1052 out.go:177]   - env NO_PROXY=172.17.88.175,172.17.84.187
	I0603 04:27:32.151758    1052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 04:27:32.156273    1052 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 04:27:32.159075    1052 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 04:27:32.159075    1052 ip.go:210] interface addr: 172.17.80.1/20
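
The ip.go lines above choose the host-side address for host.minikube.internal by scanning network interfaces for the first name carrying the "vEthernet (Default Switch)" prefix and taking its global IPv4 (172.17.80.1/20 here, skipping the fe80:: link-local). A minimal Go sketch of that selection; ipForInterfacePrefix is an illustrative name, not minikube's:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix returns the first global IPv4 on an interface whose
// name starts with prefix, mirroring the search logged above.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				if ip4 := ipnet.IP.To4(); ip4 != nil && !ip4.IsLinkLocalUnicast() {
					return ip4, nil // 172.17.80.1 in the run above
				}
			}
		}
	}
	return nil, fmt.Errorf("no interface matches prefix %q", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
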
	I0603 04:27:32.171914    1052 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 04:27:32.178503    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:27:32.199798    1052 mustload.go:65] Loading cluster: ha-528700
	I0603 04:27:32.200378    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:27:32.201022    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:34.343900    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:34.343900    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:34.343900    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:27:34.344623    1052 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700 for IP: 172.17.89.50
	I0603 04:27:34.344623    1052 certs.go:194] generating shared ca certs ...
	I0603 04:27:34.344623    1052 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.345193    1052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 04:27:34.345422    1052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 04:27:34.345422    1052 certs.go:256] generating profile certs ...
	I0603 04:27:34.346456    1052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\client.key
	I0603 04:27:34.346635    1052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a
	I0603 04:27:34.346796    1052 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.88.175 172.17.84.187 172.17.89.50 172.17.95.254]
	I0603 04:27:34.527642    1052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a ...
	I0603 04:27:34.527642    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a: {Name:mk98650ae6e1a65b569fcd292aea4237111735de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.529712    1052 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a ...
	I0603 04:27:34.529712    1052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a: {Name:mk677f98976c65fd93c890594ab73256d0d268dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 04:27:34.530952    1052 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt.8b5c312a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt
	I0603 04:27:34.544971    1052 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key.8b5c312a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key
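
The profile cert step above is ordinary x509 issuance: a serving certificate signed by minikubeCA whose IP SANs cover the service IP, localhost, all three control-plane node IPs, and the kube-vip VIP, so the apiserver is valid under any of those addresses. A sketch under those assumptions (signAPIServerCert is illustrative; serial handling and the on-disk lock/write dance are elided):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signAPIServerCert issues a serving cert with the SAN list logged above,
// signed by the supplied CA pair (standing in for minikubeCA).
func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), // real code uses a random serial
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the crypto.go line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.88.175"), net.ParseIP("172.17.84.187"),
			net.ParseIP("172.17.89.50"), net.ParseIP("172.17.95.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
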
	I0603 04:27:34.545949    1052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key
	I0603 04:27:34.545949    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 04:27:34.545949    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 04:27:34.546978    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 04:27:34.547869    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 04:27:34.548014    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 04:27:34.548141    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 04:27:34.548801    1052 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 04:27:34.548801    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 04:27:34.548801    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 04:27:34.549802    1052 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 04:27:34.549802    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 04:27:34.549802    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 04:27:34.550858    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:34.551131    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:36.719842    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:36.719969    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:36.720060    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:39.345721    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:27:39.345721    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:39.345721    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:27:39.454907    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0603 04:27:39.462390    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 04:27:39.498283    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0603 04:27:39.507554    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 04:27:39.538881    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 04:27:39.547013    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 04:27:39.580183    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0603 04:27:39.588152    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 04:27:39.622311    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0603 04:27:39.636346    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 04:27:39.670280    1052 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0603 04:27:39.681040    1052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0603 04:27:39.702197    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 04:27:39.752550    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 04:27:39.798009    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 04:27:39.850446    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 04:27:39.901617    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 04:27:39.955460    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 04:27:40.007890    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 04:27:40.054382    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-528700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 04:27:40.105998    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 04:27:40.149714    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 04:27:40.195183    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 04:27:40.244674    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 04:27:40.276496    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 04:27:40.309274    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 04:27:40.343128    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 04:27:40.373125    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 04:27:40.405644    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0603 04:27:40.436770    1052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 04:27:40.481709    1052 ssh_runner.go:195] Run: openssl version
	I0603 04:27:40.502921    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 04:27:40.535046    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.542086    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.553425    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 04:27:40.574437    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 04:27:40.608950    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 04:27:40.638375    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.646857    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.661991    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 04:27:40.683841    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 04:27:40.719104    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 04:27:40.753551    1052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.760993    1052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.774040    1052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 04:27:40.794815    1052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
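
The openssl/ln pairs above implement OpenSSL's subject-hash lookup scheme: a trust anchor is found via a symlink named <subject-hash>.0 in /etc/ssl/certs, so each PEM is hashed and linked into place (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A Go sketch of one such link; linkCertByHash is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCertByHash reproduces the step above: ask openssl for the subject
// hash, then create /etc/ssl/certs/<hash>.0 pointing at the PEM.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same idempotent guard the log shows: only (re)create if missing.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}
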
	I0603 04:27:40.826385    1052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 04:27:40.836543    1052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 04:27:40.836543    1052 kubeadm.go:928] updating node {m03 172.17.89.50 8443 v1.30.1 docker true true} ...
	I0603 04:27:40.836543    1052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-528700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.89.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 04:27:40.837131    1052 kube-vip.go:115] generating kube-vip config ...
	I0603 04:27:40.850165    1052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 04:27:40.877319    1052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 04:27:40.877319    1052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 04:27:40.890218    1052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 04:27:40.912623    1052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 04:27:40.924520    1052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 04:27:40.949194    1052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 04:27:40.950062    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:27:40.950129    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:27:40.964147    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:27:40.965842    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 04:27:40.971614    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 04:27:40.990959    1052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:27:40.991100    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 04:27:40.991174    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 04:27:40.991174    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 04:27:40.991174    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 04:27:41.004122    1052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 04:27:41.058883    1052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 04:27:41.060291    1052 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
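
Each "Not caching binary" line above uses the checksum=file:<url>.sha256 query form: download the binary, fetch its published SHA-256, and refuse the file on mismatch. A self-contained Go sketch of that pattern; fetchVerified is illustrative, and minikube routes this through its download package rather than raw net/http:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads binURL to dest while hashing the stream, then
// compares against the first field of the published checksum file.
func fetchVerified(binURL, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(strings.TrimSpace(string(sumBytes)))[0]

	resp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	fmt.Println(fetchVerified(
		"https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet",
		"https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256",
		"kubelet"))
}
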
	I0603 04:27:42.249625    1052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 04:27:42.270026    1052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0603 04:27:42.312518    1052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 04:27:42.346923    1052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0603 04:27:42.403045    1052 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0603 04:27:42.409800    1052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 04:27:42.443745    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:27:42.651031    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:27:42.681662    1052 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:27:42.682626    1052 start.go:316] joinCluster: &{Name:ha-528700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-528700 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.88.175 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.84.187 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 04:27:42.682897    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 04:27:42.682952    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:27:44.872991    1052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:27:44.873957    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:44.874080    1052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:27:47.502082    1052 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:27:47.502082    1052 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:27:47.502972    1052 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:27:47.715750    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0327335s)
	I0603 04:27:47.715830    1052 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:27:47.715886    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nx0soc.q0j32x6kkd97gdds --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m03 --control-plane --apiserver-advertise-address=172.17.89.50 --apiserver-bind-port=8443"
	I0603 04:28:31.930574    1052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nx0soc.q0j32x6kkd97gdds --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-528700-m03 --control-plane --apiserver-advertise-address=172.17.89.50 --apiserver-bind-port=8443": (44.2145106s)
	I0603 04:28:31.931263    1052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 04:28:32.718364    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-528700-m03 minikube.k8s.io/updated_at=2024_06_03T04_28_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=ha-528700 minikube.k8s.io/primary=false
	I0603 04:28:32.897094    1052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-528700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 04:28:33.117772    1052 start.go:318] duration metric: took 50.4350499s to joinCluster
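
The join above is a two-step handshake: the primary mints a join command (kubeadm token create --print-join-command), and minikube appends the control-plane specifics before running it on the new node. A sketch of assembling the final command from the printed one, using the exact flags logged above (buildControlPlaneJoin is an illustrative helper):

package main

import (
	"fmt"
	"strings"
)

// buildControlPlaneJoin appends the extra flags minikube adds for a new
// control-plane node to the base join command printed by the primary.
func buildControlPlaneJoin(printed, nodeName, advertiseIP string) string {
	base := strings.TrimSpace(printed)
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return base + " " + strings.Join(extra, " ")
}

func main() {
	// Token and CA hash are the ones minted in this run.
	fmt.Println(buildControlPlaneJoin(
		"kubeadm join control-plane.minikube.internal:8443 --token nx0soc.q0j32x6kkd97gdds --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8",
		"ha-528700-m03", "172.17.89.50"))
}
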
	I0603 04:28:33.117772    1052 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.89.50 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 04:28:33.118659    1052 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:28:33.121654    1052 out.go:177] * Verifying Kubernetes components...
	I0603 04:28:33.138661    1052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 04:28:33.657250    1052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 04:28:33.690383    1052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:28:33.691112    1052 kapi.go:59] client config for ha-528700: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-528700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 04:28:33.691112    1052 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.88.175:8443
	I0603 04:28:33.692221    1052 node_ready.go:35] waiting up to 6m0s for node "ha-528700-m03" to be "Ready" ...
	I0603 04:28:33.692355    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:33.692355    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:33.692355    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:33.692468    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:33.707016    1052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 04:28:34.193603    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:34.193603    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:34.193603    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:34.193603    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:34.199451    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:34.701281    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:34.701329    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:34.701329    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:34.701360    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:34.705636    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:35.193849    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:35.193964    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:35.194020    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:35.194020    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:35.199336    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:35.698766    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:35.698766    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:35.698766    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:35.698766    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:35.702954    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:35.705141    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:36.206400    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:36.206400    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:36.206477    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:36.206477    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:36.211226    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:36.696301    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:36.696375    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:36.696375    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:36.696375    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:36.701826    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:37.205200    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:37.205288    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:37.205288    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:37.205288    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:37.209869    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:37.697807    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:37.698015    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:37.698015    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:37.698015    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:37.702854    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:38.192799    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:38.192799    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:38.192799    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:38.192799    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:38.355035    1052 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0603 04:28:38.356923    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:38.707697    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:38.707774    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:38.707774    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:38.707774    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:38.712492    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:39.207412    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:39.207412    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:39.207412    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:39.207412    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:39.212473    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:39.693230    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:39.693280    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:39.693280    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:39.693280    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:39.702332    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:28:40.196586    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:40.196844    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:40.196844    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:40.196844    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:40.203172    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:40.697913    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:40.697913    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:40.697913    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:40.697913    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:40.702347    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:40.703048    1052 node_ready.go:53] node "ha-528700-m03" has status "Ready":"False"
	I0603 04:28:41.200350    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:41.200350    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:41.200643    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:41.200643    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:41.206103    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:41.699558    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:41.699810    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:41.699810    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:41.699810    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:41.704509    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.204235    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.204422    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.204422    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.204422    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.209071    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.210288    1052 node_ready.go:49] node "ha-528700-m03" has status "Ready":"True"
	I0603 04:28:42.210349    1052 node_ready.go:38] duration metric: took 8.5180503s for node "ha-528700-m03" to be "Ready" ...
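
The GET loop above polls /api/v1/nodes/ha-528700-m03 roughly every 500ms until the Ready condition turns True, which took about 8.5s here. A client-go sketch of the same wait, assuming a standard kubeconfig (waitNodeReady is an illustrative name; node_ready.go implements its own variant):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node every 500ms until its Ready condition
// is True, or the context expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-528700-m03"))
}
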
	I0603 04:28:42.210349    1052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 04:28:42.210492    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:42.210492    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.210492    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.210492    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.228074    1052 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 04:28:42.238194    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.238194    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6tv8
	I0603 04:28:42.238194    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.238194    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.238194    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.243192    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.244493    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.244493    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.244493    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.244493    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.248801    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.249724    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.249777    1052 pod_ready.go:81] duration metric: took 11.5834ms for pod "coredns-7db6d8ff4d-f6tv8" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.249777    1052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.249889    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qwkq9
	I0603 04:28:42.249889    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.249889    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.249889    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.255356    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:42.257252    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.257252    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.257378    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.257378    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.260634    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.262033    1052 pod_ready.go:92] pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.262033    1052 pod_ready.go:81] duration metric: took 12.2555ms for pod "coredns-7db6d8ff4d-qwkq9" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.262033    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.262033    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700
	I0603 04:28:42.262033    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.262033    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.262033    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.265647    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.267024    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:42.267093    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.267093    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.267093    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.270351    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.271881    1052 pod_ready.go:92] pod "etcd-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.271881    1052 pod_ready.go:81] duration metric: took 9.8481ms for pod "etcd-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.271881    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.271960    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m02
	I0603 04:28:42.272061    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.272061    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.272061    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.275312    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:42.276340    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:42.276398    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.276398    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.276398    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.280661    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:42.282242    1052 pod_ready.go:92] pod "etcd-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:42.282297    1052 pod_ready.go:81] duration metric: took 10.2823ms for pod "etcd-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.282297    1052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:42.407452    1052 request.go:629] Waited for 125.0198ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.407634    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.407634    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.407634    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.407634    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.414792    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:28:42.610442    1052 request.go:629] Waited for 194.4642ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.610792    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:42.610792    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.610792    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.610792    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.617377    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:42.813171    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:42.813171    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:42.813379    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:42.813379    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:42.818428    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:43.015598    1052 request.go:629] Waited for 195.3527ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.015598    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.015598    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.015598    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.015598    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.021328    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:43.296053    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:43.296325    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.296325    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.296325    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.299643    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:43.406394    1052 request.go:629] Waited for 103.4776ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.406726    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.406726    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.406726    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.406890    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.411102    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:43.782655    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:43.783342    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.783342    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.783342    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.796559    1052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 04:28:43.813534    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:43.813788    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:43.813788    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:43.813788    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:43.817968    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.287600    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:44.292027    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.292027    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.292027    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.297564    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:44.298199    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:44.298199    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.298744    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.298744    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.303047    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.304504    1052 pod_ready.go:102] pod "etcd-ha-528700-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 04:28:44.788824    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:44.788824    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.788824    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.788824    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.793650    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:44.795489    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:44.795489    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:44.795547    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:44.795547    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:44.800326    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.289895    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:45.289951    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.289951    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.289951    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.303749    1052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 04:28:45.305105    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:45.305245    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.305245    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.305245    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.311209    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:45.790660    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/etcd-ha-528700-m03
	I0603 04:28:45.790660    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.790660    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.790660    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.795270    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.797380    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:45.797380    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.797466    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.797466    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.801993    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.802279    1052 pod_ready.go:92] pod "etcd-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:45.802279    1052 pod_ready.go:81] duration metric: took 3.5199738s for pod "etcd-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:45.802279    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:45.802821    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700
	I0603 04:28:45.802821    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.802821    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.802821    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.807122    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:45.808530    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:45.808530    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:45.808530    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:45.808530    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:45.814973    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:45.816334    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:45.816367    1052 pod_ready.go:81] duration metric: took 14.0877ms for pod "kube-apiserver-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:45.816367    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.008249    1052 request.go:629] Waited for 191.8209ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:28:46.008586    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m02
	I0603 04:28:46.008586    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.008680    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.008680    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.015023    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:46.212992    1052 request.go:629] Waited for 196.6882ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:46.213125    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:46.213263    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.213263    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.213263    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.218774    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:46.220025    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:46.220025    1052 pod_ready.go:81] duration metric: took 403.657ms for pod "kube-apiserver-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.220025    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:46.415440    1052 request.go:629] Waited for 195.1327ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.415524    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.415524    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.415524    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.415601    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.421948    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:46.618959    1052 request.go:629] Waited for 195.8944ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:46.618959    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:46.618959    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.618959    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.618959    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.624705    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:46.806263    1052 request.go:629] Waited for 77.7386ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.806563    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-528700-m03
	I0603 04:28:46.806563    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:46.806563    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:46.806563    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:46.813002    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:47.009971    1052 request.go:629] Waited for 195.7123ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:47.010314    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:47.010314    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.010314    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.010314    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.014906    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:47.015724    1052 pod_ready.go:92] pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.015724    1052 pod_ready.go:81] duration metric: took 795.6979ms for pod "kube-apiserver-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.015724    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.212350    1052 request.go:629] Waited for 196.461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:28:47.212427    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700
	I0603 04:28:47.212560    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.212560    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.212560    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.217705    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:47.414407    1052 request.go:629] Waited for 195.2934ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:47.414407    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:47.414407    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.414407    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.414407    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.421885    1052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 04:28:47.423518    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.423518    1052 pod_ready.go:81] duration metric: took 407.7931ms for pod "kube-controller-manager-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.423586    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.617277    1052 request.go:629] Waited for 193.6227ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:28:47.617587    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m02
	I0603 04:28:47.617587    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.617587    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.617587    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.622760    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:47.805527    1052 request.go:629] Waited for 180.9864ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:47.805746    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:47.805807    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:47.805807    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:47.805807    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:47.810432    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:47.811844    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:47.811896    1052 pod_ready.go:81] duration metric: took 388.3091ms for pod "kube-controller-manager-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:47.811896    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.009334    1052 request.go:629] Waited for 197.3039ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m03
	I0603 04:28:48.009522    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-528700-m03
	I0603 04:28:48.009522    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.009522    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.009640    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.014853    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:48.213114    1052 request.go:629] Waited for 196.4653ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:48.213313    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:48.213313    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.213313    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.213313    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.217681    1052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 04:28:48.218748    1052 pod_ready.go:92] pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:48.218748    1052 pod_ready.go:81] duration metric: took 406.8513ms for pod "kube-controller-manager-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.218824    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.417855    1052 request.go:629] Waited for 198.5257ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:28:48.418377    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbr56
	I0603 04:28:48.418377    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.418377    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.418377    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.423207    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:48.604863    1052 request.go:629] Waited for 180.5953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:48.604932    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:48.605035    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.605035    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.605035    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.611880    1052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 04:28:48.612834    1052 pod_ready.go:92] pod "kube-proxy-dbr56" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:48.612834    1052 pod_ready.go:81] duration metric: took 393.9487ms for pod "kube-proxy-dbr56" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.612834    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fggr6" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:48.809657    1052 request.go:629] Waited for 196.4992ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fggr6
	I0603 04:28:48.809891    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fggr6
	I0603 04:28:48.809891    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:48.809994    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:48.810038    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:48.815897    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.013529    1052 request.go:629] Waited for 196.4179ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:49.013529    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:49.013759    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.013759    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.013759    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.017946    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.019167    1052 pod_ready.go:92] pod "kube-proxy-fggr6" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.019167    1052 pod_ready.go:81] duration metric: took 406.332ms for pod "kube-proxy-fggr6" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.019167    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.216456    1052 request.go:629] Waited for 196.5445ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:28:49.216846    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlzrp
	I0603 04:28:49.216939    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.216939    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.216939    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.221512    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.419829    1052 request.go:629] Waited for 197.8562ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:49.419829    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:49.419829    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.419829    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.419829    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.425571    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.426919    1052 pod_ready.go:92] pod "kube-proxy-wlzrp" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.426919    1052 pod_ready.go:81] duration metric: took 407.7509ms for pod "kube-proxy-wlzrp" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.426990    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.608397    1052 request.go:629] Waited for 180.9961ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:28:49.608577    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700
	I0603 04:28:49.608652    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.608652    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.608652    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.613842    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:49.810719    1052 request.go:629] Waited for 195.9362ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:49.811287    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700
	I0603 04:28:49.811287    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:49.811287    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:49.811287    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:49.815700    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:49.817626    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:49.817626    1052 pod_ready.go:81] duration metric: took 390.6353ms for pod "kube-scheduler-ha-528700" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:49.817626    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.015259    1052 request.go:629] Waited for 197.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:28:50.015776    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m02
	I0603 04:28:50.015776    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.015776    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.015847    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.024914    1052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 04:28:50.219138    1052 request.go:629] Waited for 193.1934ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:50.219443    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m02
	I0603 04:28:50.219443    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.219443    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.219443    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.225129    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.227080    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:50.227080    1052 pod_ready.go:81] duration metric: took 409.4527ms for pod "kube-scheduler-ha-528700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.227080    1052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.406002    1052 request.go:629] Waited for 178.9216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m03
	I0603 04:28:50.406358    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-528700-m03
	I0603 04:28:50.406358    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.406358    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.406358    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.411605    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.608807    1052 request.go:629] Waited for 195.5637ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:50.608948    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes/ha-528700-m03
	I0603 04:28:50.608948    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.608948    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.608948    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.614449    1052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 04:28:50.616451    1052 pod_ready.go:92] pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 04:28:50.616510    1052 pod_ready.go:81] duration metric: took 389.4294ms for pod "kube-scheduler-ha-528700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 04:28:50.616510    1052 pod_ready.go:38] duration metric: took 8.4061423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
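
The pod_ready.go loop that ends here is the per-pod readiness gate: for each system-critical pod it GETs the pod, then GETs the node it is scheduled on, and re-polls until the pod reports the Ready condition. A minimal client-go sketch of that check follows — an illustration, not minikube's actual pod_ready.go, and it assumes a reachable default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig, as kubectl would.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Mirrors the `has status "Ready":"True"` lines in the log above.
				fmt.Printf("%s Ready=%v\n", p.Name, c.Status)
			}
		}
	}
}
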
	I0603 04:28:50.616610    1052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 04:28:50.629236    1052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:28:50.663509    1052 api_server.go:72] duration metric: took 17.5456968s to wait for apiserver process to appear ...
	I0603 04:28:50.663509    1052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 04:28:50.663509    1052 api_server.go:253] Checking apiserver healthz at https://172.17.88.175:8443/healthz ...
	I0603 04:28:50.671322    1052 api_server.go:279] https://172.17.88.175:8443/healthz returned 200:
	ok
	I0603 04:28:50.671464    1052 round_trippers.go:463] GET https://172.17.88.175:8443/version
	I0603 04:28:50.671489    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.671489    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.671489    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.672657    1052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 04:28:50.673289    1052 api_server.go:141] control plane version: v1.30.1
	I0603 04:28:50.673289    1052 api_server.go:131] duration metric: took 9.7801ms to wait for apiserver health ...
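
The api_server.go health gate just logged is a plain GET against /healthz that must return the literal body "ok". A sketch of the same probe through client-go's REST client — illustrative only, under the same kubeconfig assumption as the sketch above:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz on the apiserver; a healthy control plane answers "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
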
	I0603 04:28:50.673289    1052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 04:28:50.812535    1052 request.go:629] Waited for 138.9034ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:50.812725    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:50.812725    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:50.812725    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:50.812725    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:50.823419    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:28:50.833503    1052 system_pods.go:59] 24 kube-system pods found
	I0603 04:28:50.833503    1052 system_pods.go:61] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "etcd-ha-528700-m03" [9971b938-e085-42f9-83b7-f868d3ac29e3] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kindnet-m9x6v" [77ce9a12-df3d-4bcc-9a1f-dc34158d2c75] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-apiserver-ha-528700-m03" [0498e9ff-f11f-4c0b-bd0a-d2a21b9c37b5] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-controller-manager-ha-528700-m03" [c8a8819a-e8cf-4123-b353-55364fa738c5] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-fggr6" [13f51aa0-f497-4fed-af63-8358e0a6ee9c] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-scheduler-ha-528700-m03" [59a02823-6fef-44f0-90a1-ff4f87eb9a3b] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "kube-vip-ha-528700-m03" [b7b8c197-df95-441d-a014-21827c9c2fb0] Running
	I0603 04:28:50.833503    1052 system_pods.go:61] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:28:50.833503    1052 system_pods.go:74] duration metric: took 160.2135ms to wait for pod list to return data ...
	I0603 04:28:50.833503    1052 default_sa.go:34] waiting for default service account to be created ...
	I0603 04:28:51.016152    1052 request.go:629] Waited for 182.4034ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:28:51.016240    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/default/serviceaccounts
	I0603 04:28:51.016240    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.016240    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.016240    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.020683    1052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 04:28:51.021442    1052 default_sa.go:45] found service account: "default"
	I0603 04:28:51.021442    1052 default_sa.go:55] duration metric: took 187.9389ms for default service account to be created ...
	I0603 04:28:51.021442    1052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 04:28:51.218196    1052 request.go:629] Waited for 196.7533ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:51.218376    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/namespaces/kube-system/pods
	I0603 04:28:51.218376    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.218503    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.218604    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.228880    1052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 04:28:51.238955    1052 system_pods.go:86] 24 kube-system pods found
	I0603 04:28:51.238955    1052 system_pods.go:89] "coredns-7db6d8ff4d-f6tv8" [3f7b978f-f6a3-4c1d-a254-4a65647dedda] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "coredns-7db6d8ff4d-qwkq9" [36af9702-70db-4347-b07b-a6a41b12b7c6] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700" [ac8887a0-0163-42ba-922e-d5f0b663eea2] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700-m02" [54109a9c-4ba4-465f-9327-c16b5ab5a707] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "etcd-ha-528700-m03" [9971b938-e085-42f9-83b7-f868d3ac29e3] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-b247z" [0b49b8fa-c461-4108-b10d-431d68087499] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-g475v" [d88caff2-ef98-4d05-ad90-b0666a3c78cc] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kindnet-m9x6v" [77ce9a12-df3d-4bcc-9a1f-dc34158d2c75] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700" [1ea6a9fb-edd8-45ac-9d57-87141b2787ad] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700-m02" [184ddcfe-97d5-4cc3-a81d-51fcf02527c9] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-apiserver-ha-528700-m03" [0498e9ff-f11f-4c0b-bd0a-d2a21b9c37b5] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700" [a9d5abe0-eb51-4c52-ba3a-52dfce8972d8] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m02" [0c0b3e54-a328-451a-8f01-4853247cc111] Running
	I0603 04:28:51.238955    1052 system_pods.go:89] "kube-controller-manager-ha-528700-m03" [c8a8819a-e8cf-4123-b353-55364fa738c5] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-dbr56" [0a025682-18bb-4412-b1ea-2d2b04c8e1eb] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-fggr6" [13f51aa0-f497-4fed-af63-8358e0a6ee9c] Running
	I0603 04:28:51.239539    1052 system_pods.go:89] "kube-proxy-wlzrp" [29a87f78-498c-4797-94a9-dd0cd822bba1] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700" [cbfa8ee4-ed56-4eda-8407-f9aea783cab0] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700-m02" [10790962-efdb-4316-87ea-3e7e6e83b62e] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-scheduler-ha-528700-m03" [59a02823-6fef-44f0-90a1-ff4f87eb9a3b] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700" [5f44a8b9-304c-468f-bbe8-e4888643bf7a] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700-m02" [ce4e4aae-cb4c-44e9-be29-fffc7a864ade] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "kube-vip-ha-528700-m03" [b7b8c197-df95-441d-a014-21827c9c2fb0] Running
	I0603 04:28:51.239618    1052 system_pods.go:89] "storage-provisioner" [7c7b9977-086b-42d1-8504-b6df231f507d] Running
	I0603 04:28:51.239670    1052 system_pods.go:126] duration metric: took 218.2277ms to wait for k8s-apps to be running ...
	I0603 04:28:51.239699    1052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 04:28:51.251803    1052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:28:51.279143    1052 system_svc.go:56] duration metric: took 39.4199ms WaitForService to wait for kubelet
	I0603 04:28:51.279143    1052 kubeadm.go:576] duration metric: took 18.1613298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 04:28:51.279213    1052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 04:28:51.405106    1052 request.go:629] Waited for 125.3144ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.88.175:8443/api/v1/nodes
	I0603 04:28:51.405273    1052 round_trippers.go:463] GET https://172.17.88.175:8443/api/v1/nodes
	I0603 04:28:51.405273    1052 round_trippers.go:469] Request Headers:
	I0603 04:28:51.405357    1052 round_trippers.go:473]     Accept: application/json, */*
	I0603 04:28:51.405357    1052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 04:28:51.413502    1052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 04:28:51.416648    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416780    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416780    1052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 04:28:51.416780    1052 node_conditions.go:123] node cpu capacity is 2
	I0603 04:28:51.416873    1052 node_conditions.go:105] duration metric: took 137.6604ms to run NodePressure ...
	I0603 04:28:51.416948    1052 start.go:240] waiting for startup goroutines ...
	I0603 04:28:51.417004    1052 start.go:254] writing updated cluster config ...
	I0603 04:28:51.429825    1052 ssh_runner.go:195] Run: rm -f paused
	I0603 04:28:51.568920    1052 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 04:28:51.573608    1052 out.go:177] * Done! kubectl is now configured to use "ha-528700" cluster and "default" namespace by default
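
The recurring `request.go:629] Waited for ... due to client-side throttling` lines throughout this run are client-go's default token-bucket rate limiter (QPS 5, burst 10) pacing the readiness polls, as the message itself notes — not API-server priority and fairness. A client that polls this densely can raise the limits on its rest.Config before building the clientset; a sketch under the same kubeconfig assumption:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; the ~100-200ms waits in the log are this
	// limiter spacing requests out. Raising both removes the client-side stalls.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
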
	
	
	==> Docker <==
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119109985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119123785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.119220486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.369799672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.370273475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.370735378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:21:05 ha-528700 dockerd[1334]: time="2024-06-03T11:21:05.371346281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.281822842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282011741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282033541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 dockerd[1334]: time="2024-06-03T11:29:31.282757740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:31 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:29:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef4916f63c2572e500af0e435ad66fa844055789a54996445d44f5e45da81067/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 11:29:32 ha-528700 cri-dockerd[1233]: time="2024-06-03T11:29:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115184393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115278593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115293993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:29:33 ha-528700 dockerd[1334]: time="2024-06-03T11:29:33.115397794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 11:30:35 ha-528700 dockerd[1328]: 2024/06/03 11:30:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:35 ha-528700 dockerd[1328]: 2024/06/03 11:30:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 11:30:36 ha-528700 dockerd[1328]: 2024/06/03 11:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
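
The repeated dockerd warnings above come from Go's net/http server: a handler (here reached through the otelhttp wrapper) called ResponseWriter.WriteHeader after the status had already been written, so the second call is ignored and logged. A minimal generic reproduction of the warning — plain net/http, not Docker's actual handler:

package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)         // first write wins
	w.WriteHeader(http.StatusBadGateway) // logged: "http: superfluous response.WriteHeader call"
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
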
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8aac137d2078d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   ef4916f63c257       busybox-fc5497c4f-np7rl
	e337c58c541be       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   638df5069b9c2       coredns-7db6d8ff4d-qwkq9
	2a6bf989eb78f       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   8fa51718f47e2       storage-provisioner
	3f2ce3288a437       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   fceba7a162c21       coredns-7db6d8ff4d-f6tv8
	545c59933594b       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   ab6dcc7849e12       kindnet-b247z
	eeac3b42fbc22       747097150317f                                                                                         26 minutes ago      Running             kube-proxy                0                   e5ccb93689142       kube-proxy-dbr56
	3fbe4523644ae       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   1700399e7e214       kube-vip-ha-528700
	ed3e2e6ea4df3       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   4673e27399785       kube-controller-manager-ha-528700
	7dce0e761e834       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   dbc6ba1c0ac40       etcd-ha-528700
	7528ad5d62047       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   326cf3a1b3414       kube-scheduler-ha-528700
	10075ba4eda88       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   4b60d234d135c       kube-apiserver-ha-528700
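	
	The table above is the node's container-runtime view; a roughly equivalent manual check (a sketch, assuming the profile name used in this run and that crictl is available on the node) would be:
	
	  out/minikube-windows-amd64.exe -p ha-528700 ssh -- sudo crictl ps -a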
	
	
	==> coredns [3f2ce3288a43] <==
	[INFO] 10.244.2.2:33305 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011562319s
	[INFO] 10.244.2.2:60267 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001525s
	[INFO] 10.244.2.2:60436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182401s
	[INFO] 10.244.1.2:45066 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000933s
	[INFO] 10.244.1.2:49898 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001287s
	[INFO] 10.244.1.2:39543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000440901s
	[INFO] 10.244.1.2:37707 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000155701s
	[INFO] 10.244.0.4:57657 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000319001s
	[INFO] 10.244.0.4:54536 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001162s
	[INFO] 10.244.0.4:54212 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.005736709s
	[INFO] 10.244.2.2:54815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202s
	[INFO] 10.244.2.2:53251 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001682s
	[INFO] 10.244.2.2:45061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186301s
	[INFO] 10.244.1.2:44264 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001937s
	[INFO] 10.244.1.2:33181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001814s
	[INFO] 10.244.1.2:37345 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001367s
	[INFO] 10.244.0.4:55312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208201s
	[INFO] 10.244.0.4:43313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001737s
	[INFO] 10.244.0.4:57390 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001376s
	[INFO] 10.244.0.4:60067 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002004s
	[INFO] 10.244.2.2:38692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220101s
	[INFO] 10.244.2.2:44288 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000243501s
	[INFO] 10.244.1.2:36361 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001997s
	[INFO] 10.244.1.2:34253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000877s
	[INFO] 10.244.0.4:48401 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000253101s
	
	
	==> coredns [e337c58c541b] <==
	[INFO] 127.0.0.1:40581 - 14972 "HINFO IN 3959985873406318438.8433902953276015444. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034571418s
	[INFO] 10.244.2.2:59139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.005645009s
	[INFO] 10.244.0.4:47366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000324901s
	[INFO] 10.244.0.4:43790 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001493303s
	[INFO] 10.244.0.4:57180 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.072006613s
	[INFO] 10.244.2.2:53854 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002349s
	[INFO] 10.244.2.2:49891 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001432s
	[INFO] 10.244.1.2:38448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002266s
	[INFO] 10.244.1.2:43391 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284501s
	[INFO] 10.244.1.2:50524 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061s
	[INFO] 10.244.1.2:48059 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001926s
	[INFO] 10.244.0.4:41207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002291s
	[INFO] 10.244.0.4:52826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013039621s
	[INFO] 10.244.0.4:47414 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265501s
	[INFO] 10.244.0.4:53717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214201s
	[INFO] 10.244.0.4:37365 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001134s
	[INFO] 10.244.2.2:60828 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001446s
	[INFO] 10.244.1.2:33790 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001435s
	[INFO] 10.244.2.2:44374 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001982s
	[INFO] 10.244.2.2:60223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001125s
	[INFO] 10.244.1.2:47096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001271s
	[INFO] 10.244.1.2:46573 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000180201s
	[INFO] 10.244.0.4:57331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001275s
	[INFO] 10.244.0.4:56864 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000626s
	[INFO] 10.244.0.4:60853 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000757s
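	
	To generate comparable lookups against these coredns pods (a sketch, assuming a kubectl context named after the profile and that the busybox image from the container status table ships an nslookup applet):
	
	  kubectl --context ha-528700 run --rm dns-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local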
	
	
	==> describe nodes <==
	Name:               ha-528700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T04_20_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:47:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:45:10 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:45:10 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:45:10 +0000   Mon, 03 Jun 2024 11:20:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:45:10 +0000   Mon, 03 Jun 2024 11:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.88.175
	  Hostname:    ha-528700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 867fa0169e944c39ab4f9d2356c523db
	  System UUID:                e9e49675-4f1e-4643-9f41-a8c6e6f0faf7
	  Boot ID:                    12b1a7a0-fc13-47d3-9ff0-c7ad1a0dfbf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np7rl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-f6tv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-qwkq9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-528700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-b247z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-528700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-528700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-dbr56                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-528700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-528700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-528700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-528700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-528700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-528700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-528700 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-528700 event: Registered Node ha-528700 in Controller
	
	
	Name:               ha-528700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T04_24_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:24:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:46:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 11:45:28 +0000   Mon, 03 Jun 2024 11:47:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 11:45:28 +0000   Mon, 03 Jun 2024 11:47:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 11:45:28 +0000   Mon, 03 Jun 2024 11:47:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 11:45:28 +0000   Mon, 03 Jun 2024 11:47:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.84.187
	  Hostname:    ha-528700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6896cf2650f4ab1b2fc4fc4d5a4a779
	  System UUID:                9df023a1-46d6-9d47-90f6-a62a2438553a
	  Boot ID:                    8fed1965-c792-451f-9e6f-cbe02ddb8e94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hd7gx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-528700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-g475v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-528700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-528700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-wlzrp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-528700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-528700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-528700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-528700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-528700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-528700-m02 event: Registered Node ha-528700-m02 in Controller
	  Normal  NodeNotReady             51s                node-controller  Node ha-528700-m02 status is now: NodeNotReady
	
	
	Name:               ha-528700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T04_28_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:28:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:47:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:45:17 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:45:17 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:45:17 +0000   Mon, 03 Jun 2024 11:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:45:17 +0000   Mon, 03 Jun 2024 11:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.89.50
	  Hostname:    ha-528700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 03139da24ec642c79bb348ceaf512292
	  System UUID:                51f24894-b999-ee44-9796-5032cc45e0e1
	  Boot ID:                    8d86378c-bcfc-4115-8889-05350921e2c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bz4xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-528700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-m9x6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-528700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-528700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-fggr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-528700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-528700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-528700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-528700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-528700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-528700-m03 event: Registered Node ha-528700-m03 in Controller
	
	
	Name:               ha-528700-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-528700-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=ha-528700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T04_33_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 11:33:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-528700-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 11:47:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 11:44:32 +0000   Mon, 03 Jun 2024 11:33:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 11:44:32 +0000   Mon, 03 Jun 2024 11:33:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 11:44:32 +0000   Mon, 03 Jun 2024 11:33:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 11:44:32 +0000   Mon, 03 Jun 2024 11:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.88.156
	  Hostname:    ha-528700-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30fb6ae3043a4dd8a262d4235ba6062a
	  System UUID:                d1160ea4-85dd-204f-b9c2-50ae4c2dced3
	  Boot ID:                    a9b1a563-c2ec-4f9a-aa17-1bcb0f02431c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-29rxf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-llxv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-528700-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-528700-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-528700-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-528700-m04 event: Registered Node ha-528700-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-528700-m04 event: Registered Node ha-528700-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-528700-m04 event: Registered Node ha-528700-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-528700-m04 status is now: NodeReady
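	
	Of the four nodes described above, only ha-528700-m02 carries node.kubernetes.io/unreachable taints and a NodeNotReady event; the other three report Ready. To re-check node state after the run (assuming a kubectl context named after the profile):
	
	  kubectl --context ha-528700 get nodes -o wide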
	
	
	==> dmesg <==
	[  +7.004599] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 11:19] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.191658] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Jun 3 11:20] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.101566] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.540796] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.188931] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.229265] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.790155] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.174359] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.187962] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.263943] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[ +11.269834] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.102606] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.407460] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +6.451804] systemd-fstab-generator[1724]: Ignoring "noauto" option for root device
	[  +0.103095] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.695159] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.629666] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[ +14.937078] kauditd_printk_skb: 17 callbacks suppressed
	[Jun 3 11:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.857066] kauditd_printk_skb: 35 callbacks suppressed
	[Jun 3 11:31] hrtimer: interrupt took 3023503 ns
	
	
	==> etcd [7dce0e761e83] <==
	{"level":"warn","ts":"2024-06-03T11:47:54.085329Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7d78f440bc9e3f64","rtt":"14.325662ms","error":"dial tcp 172.17.84.187:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-03T11:47:54.162878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.17214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.17722Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.183719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.195316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.20577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.213927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.218631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.223107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.233345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.245145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.252132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.256685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.262646Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.273618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.283667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.28399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.293662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.29897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.304004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.311001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.318709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.327932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T11:47:54.38343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6ddfdcb93034918c","from":"6ddfdcb93034918c","remote-peer-id":"7d78f440bc9e3f64","remote-peer-name":"pipeline","remote-peer-active":false}
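	
	The peer etcd cannot reach, 172.17.84.187:2380, is ha-528700-m02's InternalIP from the node description above, consistent with that node's NotReady state. One way to confirm member health from the surviving control plane (a sketch; the etcdctl flags are standard v3 options, but the certificate paths are an assumption based on minikube's kubeadm layout):
	
	  kubectl --context ha-528700 -n kube-system exec etcd-ha-528700 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health --cluster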
	
	
	==> kernel <==
	 11:47:54 up 29 min,  0 users,  load average: 0.53, 0.62, 0.54
	Linux ha-528700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [545c59933594] <==
	I0603 11:47:15.408487       1 main.go:250] Node ha-528700-m04 has CIDR [10.244.3.0/24] 
	I0603 11:47:25.425055       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:47:25.425168       1 main.go:227] handling current node
	I0603 11:47:25.425185       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:47:25.425193       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:47:25.425780       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:47:25.425878       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:47:25.425957       1 main.go:223] Handling node with IPs: map[172.17.88.156:{}]
	I0603 11:47:25.425966       1 main.go:250] Node ha-528700-m04 has CIDR [10.244.3.0/24] 
	I0603 11:47:35.437187       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:47:35.437286       1 main.go:227] handling current node
	I0603 11:47:35.437299       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:47:35.437305       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:47:35.437617       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:47:35.437648       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:47:35.437984       1 main.go:223] Handling node with IPs: map[172.17.88.156:{}]
	I0603 11:47:35.438037       1 main.go:250] Node ha-528700-m04 has CIDR [10.244.3.0/24] 
	I0603 11:47:45.449987       1 main.go:223] Handling node with IPs: map[172.17.88.175:{}]
	I0603 11:47:45.450031       1 main.go:227] handling current node
	I0603 11:47:45.450043       1 main.go:223] Handling node with IPs: map[172.17.84.187:{}]
	I0603 11:47:45.450049       1 main.go:250] Node ha-528700-m02 has CIDR [10.244.1.0/24] 
	I0603 11:47:45.450490       1 main.go:223] Handling node with IPs: map[172.17.89.50:{}]
	I0603 11:47:45.450522       1 main.go:250] Node ha-528700-m03 has CIDR [10.244.2.0/24] 
	I0603 11:47:45.450671       1 main.go:223] Handling node with IPs: map[172.17.88.156:{}]
	I0603 11:47:45.450742       1 main.go:250] Node ha-528700-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [10075ba4eda8] <==
	I0603 11:20:54.086211       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 11:28:18.004907       1 trace.go:236] Trace[1417259662]: "Update" accept:application/json, */*,audit-id:46201383-cfef-47df-94fc-ccd55b7d08a2,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (03-Jun-2024 11:28:17.377) (total time: 601ms):
	Trace[1417259662]: ["GuaranteedUpdate etcd3" audit-id:46201383-cfef-47df-94fc-ccd55b7d08a2,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 601ms (11:28:17.378)
	Trace[1417259662]:  ---"Txn call completed" 600ms (11:28:17.978)]
	Trace[1417259662]: [601.292272ms] [601.292272ms] END
	E0603 11:28:26.916721       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0603 11:28:26.922144       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 11:28:26.922245       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 11:28:26.964833       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 11:28:26.965159       1 timeout.go:142] post-timeout activity - time-elapsed: 83.108399ms, PATCH "/api/v1/namespaces/default/events/ha-528700-m03.17d57b07b630573f" result: <nil>
	E0603 11:29:37.231645       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57569: use of closed network connection
	E0603 11:29:37.702773       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57571: use of closed network connection
	E0603 11:29:39.230641       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57574: use of closed network connection
	E0603 11:29:39.710008       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57576: use of closed network connection
	E0603 11:29:40.160273       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57578: use of closed network connection
	E0603 11:29:40.636776       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57580: use of closed network connection
	E0603 11:29:41.101364       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57582: use of closed network connection
	E0603 11:29:41.551316       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57584: use of closed network connection
	E0603 11:29:41.987023       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57586: use of closed network connection
	E0603 11:29:42.777950       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57589: use of closed network connection
	E0603 11:29:53.204971       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57591: use of closed network connection
	E0603 11:29:53.675767       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57594: use of closed network connection
	E0603 11:30:04.129134       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57596: use of closed network connection
	E0603 11:30:04.552525       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57599: use of closed network connection
	E0603 11:30:14.985146       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:57601: use of closed network connection
	
	
	==> kube-controller-manager [ed3e2e6ea4df] <==
	E0603 11:29:30.057856       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:29:30.135358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.157503ms"
	I0603 11:29:30.140095       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="796.099µs"
	I0603 11:29:30.646316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.1µs"
	I0603 11:29:31.647591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="314.3µs"
	I0603 11:29:31.667403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52µs"
	I0603 11:29:31.682789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="145.6µs"
	I0603 11:29:31.696552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.6µs"
	I0603 11:29:31.711253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.4µs"
	I0603 11:29:31.901681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165µs"
	I0603 11:29:33.246178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.968328ms"
	I0603 11:29:33.246649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="410.901µs"
	I0603 11:29:33.636500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.604361ms"
	I0603 11:29:33.637734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="164.8µs"
	I0603 11:29:33.717803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.6µs"
	I0603 11:29:34.726060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.485023ms"
	I0603 11:29:34.726577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.2µs"
	E0603 11:33:50.410673       1 certificate_controller.go:146] Sync csr-fnp9v failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-fnp9v": the object has been modified; please apply your changes to the latest version and try again
	I0603 11:33:50.502416       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-528700-m04\" does not exist"
	I0603 11:33:50.644903       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-528700-m04" podCIDRs=["10.244.3.0/24"]
	I0603 11:33:53.751538       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-528700-m04"
	I0603 11:34:14.283140       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-528700-m04"
	I0603 11:47:03.782781       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-528700-m04"
	I0603 11:47:03.995779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.631489ms"
	I0603 11:47:03.995879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.1µs"
	
	
	==> kube-proxy [eeac3b42fbc2] <==
	I0603 11:20:55.224744       1 server_linux.go:69] "Using iptables proxy"
	I0603 11:20:55.247372       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.88.175"]
	I0603 11:20:55.343996       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 11:20:55.344060       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 11:20:55.344082       1 server_linux.go:165] "Using iptables Proxier"
	I0603 11:20:55.347933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 11:20:55.348837       1 server.go:872] "Version info" version="v1.30.1"
	I0603 11:20:55.348860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 11:20:55.352069       1 config.go:192] "Starting service config controller"
	I0603 11:20:55.352126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 11:20:55.352167       1 config.go:101] "Starting endpoint slice config controller"
	I0603 11:20:55.352173       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 11:20:55.352862       1 config.go:319] "Starting node config controller"
	I0603 11:20:55.352876       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 11:20:55.452872       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 11:20:55.453054       1 shared_informer.go:320] Caches are synced for service config
	I0603 11:20:55.453341       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7528ad5d6204] <==
	E0603 11:20:37.333240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 11:20:37.411636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 11:20:37.412093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 11:20:37.478645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 11:20:37.479003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 11:20:37.523000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 11:20:37.523429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 11:20:39.882110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 11:29:29.412996       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="459e6e9b-fa56-4d66-be58-a624e0a86a56" pod="default/busybox-fc5497c4f-bz4xm" assumedNode="ha-528700-m03" currentNode="ha-528700-m02"
	E0603 11:29:29.442730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bz4xm\": pod busybox-fc5497c4f-bz4xm is already assigned to node \"ha-528700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bz4xm" node="ha-528700-m02"
	E0603 11:29:29.443386       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 459e6e9b-fa56-4d66-be58-a624e0a86a56(default/busybox-fc5497c4f-bz4xm) was assumed on ha-528700-m02 but assigned to ha-528700-m03" pod="default/busybox-fc5497c4f-bz4xm"
	E0603 11:29:29.443614       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bz4xm\": pod busybox-fc5497c4f-bz4xm is already assigned to node \"ha-528700-m03\"" pod="default/busybox-fc5497c4f-bz4xm"
	I0603 11:29:29.443828       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bz4xm" node="ha-528700-m03"
	E0603 11:33:50.644276       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lfftj\": pod kindnet-lfftj is already assigned to node \"ha-528700-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lfftj" node="ha-528700-m04"
	E0603 11:33:50.651662       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 595e2ee7-1890-4642-ad74-de40b53b76be(kube-system/kindnet-lfftj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lfftj"
	E0603 11:33:50.652037       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lfftj\": pod kindnet-lfftj is already assigned to node \"ha-528700-m04\"" pod="kube-system/kindnet-lfftj"
	I0603 11:33:50.652296       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lfftj" node="ha-528700-m04"
	E0603 11:33:50.646855       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wdbg6\": pod kube-proxy-wdbg6 is already assigned to node \"ha-528700-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wdbg6" node="ha-528700-m04"
	E0603 11:33:50.657062       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9a608a96-b1eb-4766-b495-87fa01d45f7f(kube-system/kube-proxy-wdbg6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wdbg6"
	E0603 11:33:50.657360       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wdbg6\": pod kube-proxy-wdbg6 is already assigned to node \"ha-528700-m04\"" pod="kube-system/kube-proxy-wdbg6"
	I0603 11:33:50.657654       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wdbg6" node="ha-528700-m04"
	E0603 11:33:50.791713       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-llxv2\": pod kube-proxy-llxv2 is already assigned to node \"ha-528700-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-llxv2" node="ha-528700-m04"
	E0603 11:33:50.792379       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9283a594-ddab-4719-8848-e35a2a67065d(kube-system/kube-proxy-llxv2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-llxv2"
	E0603 11:33:50.792740       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-llxv2\": pod kube-proxy-llxv2 is already assigned to node \"ha-528700-m04\"" pod="kube-system/kube-proxy-llxv2"
	I0603 11:33:50.793424       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-llxv2" node="ha-528700-m04"
	
	
	==> kubelet <==
	Jun 03 11:43:40 ha-528700 kubelet[2228]: E0603 11:43:40.390498    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:43:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:43:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:43:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:43:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:44:40 ha-528700 kubelet[2228]: E0603 11:44:40.390658    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:44:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:44:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:44:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:44:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:45:40 ha-528700 kubelet[2228]: E0603 11:45:40.390418    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:45:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:45:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:45:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:45:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:46:40 ha-528700 kubelet[2228]: E0603 11:46:40.396304    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:46:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:46:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:46:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:46:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 11:47:40 ha-528700 kubelet[2228]: E0603 11:47:40.394263    2228 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 11:47:40 ha-528700 kubelet[2228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 11:47:40 ha-528700 kubelet[2228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 11:47:40 ha-528700 kubelet[2228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 11:47:40 ha-528700 kubelet[2228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0603 04:47:46.290327   11244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
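Note on the kubelet canary failures in the dump above: they all reduce to the guest kernel lacking an ip6tables nat table (ip6table_nat not loaded), so creating the KUBE-KUBELET-CANARY chain in table nat fails once a minute. A minimal probe for that condition, assuming it is run inside the node VM (e.g. via minikube ssh) where the legacy ip6tables v1.8.9 binary lives:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Listing the nat table exits non-zero with "Table does not exist" when
	// the kernel has no ip6table_nat module -- the same condition the
	// kubelet canary trips on in the log above.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		fmt.Printf("IPv6 nat table unavailable: %v\n%s", err, out)
		return
	}
	fmt.Println("ip6tables nat table present")
}

The canary is only a liveness check on iptables state, so these entries read as recurring noise on this ISO rather than the cause of the test failure.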
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-528700 -n ha-528700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-528700 -n ha-528700: (12.2019024s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-528700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (44.11s)
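The stderr warning about the Docker CLI context, which recurs throughout this report, is internally consistent: the Docker CLI keys context metadata by the SHA-256 of the context name, and the hash in the missing meta.json path is exactly sha256("default"). A quick check of just that digest (an aside, not minikube code):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
	// matching the directory name in the missing meta.json path.
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
}

So the warning only means the Jenkins account has never materialized a "default" Docker CLI context file; it is harmless to the Hyper-V driver itself, though it pollutes stderr that some assertions expect to be empty.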

TestMountStart/serial/RestartStopped (183.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-841900
E0603 05:17:10.840885    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:18:39.510609    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-841900: exit status 90 (2m51.9592707s)

-- stdout --
	* [mount-start-2-841900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-841900
	* Restarting existing hyperv VM for "mount-start-2-841900" ...
	
	

-- /stdout --
** stderr ** 
	W0603 05:15:54.418268    7780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:17:17 mount-start-2-841900 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:17:17 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:17.764311031Z" level=info msg="Starting up"
	Jun 03 12:17:17 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:17.765493524Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:17:17 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:17.769091304Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.802746420Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.831035365Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.831121064Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.831236264Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.831276963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832122359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832242258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832472057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832567456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832589456Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.832601656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.833047554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.834059248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.837303430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.837422930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.837632629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.837726628Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.838293325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.838418224Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.838469324Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.840738212Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.840959410Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.840991110Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.841026910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.841043510Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.841122209Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.841716806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.841879805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842111804Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842289703Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842508302Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842624401Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842646001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842661801Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842677901Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842692101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842707501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842728201Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842751901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842786400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842838500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842853000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842881700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842905400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.842985499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843004099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843018899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843057299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843071199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843084099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843101999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843119399Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843141998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843155698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843435197Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843651996Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843776395Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843796995Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843811795Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843822995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843837195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.843856794Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.844179393Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.844238292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.844313192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:17:17 mount-start-2-841900 dockerd[662]: time="2024-06-03T12:17:17.844357192Z" level=info msg="containerd successfully booted in 0.044925s"
	Jun 03 12:17:18 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:18.824095273Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:17:18 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:18.852387727Z" level=info msg="Loading containers: start."
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.086172141Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.167585359Z" level=info msg="Loading containers: done."
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.189080574Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.189656532Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.244734569Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:17:19 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:19.244812904Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:17:19 mount-start-2-841900 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:17:45 mount-start-2-841900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:17:45 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:45.110770483Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:17:45 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:45.113264984Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:17:45 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:45.113456792Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:17:45 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:45.113489194Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:17:45 mount-start-2-841900 dockerd[655]: time="2024-06-03T12:17:45.113819107Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jun 03 12:17:46 mount-start-2-841900 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:17:46 mount-start-2-841900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:17:46 mount-start-2-841900 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:17:46 mount-start-2-841900 dockerd[1026]: time="2024-06-03T12:17:46.189401201Z" level=info msg="Starting up"
	Jun 03 12:18:46 mount-start-2-841900 dockerd[1026]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:18:46 mount-start-2-841900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:18:46 mount-start-2-841900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:18:46 mount-start-2-841900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-841900" : exit status 90
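Exit status 90 (RUNTIME_ENABLE) here reduces to the journal above: on the second start, dockerd gave up after 60 seconds dialing /run/containerd/containerd.sock. A reproduction sketch outside the harness (hypothetical helper; assumes the profile still exists and the same binary path) that reruns the failing restart and captures the journal the error message cites:

package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "mount-start-2-841900"
	if _, err := mk("ssh", "-p", profile, "sudo systemctl restart docker"); err != nil {
		fmt.Println("restart failed:", err)
		// Pull the same unit journal the RUNTIME_ENABLE error points at.
		journal, _ := mk("ssh", "-p", profile, "sudo journalctl --no-pager -u docker")
		fmt.Println(journal)
	}
}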
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-841900 -n mount-start-2-841900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-841900 -n mount-start-2-841900: exit status 6 (11.4003377s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0603 05:18:46.389146    7600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 05:18:57.630037    7600 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-841900" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-841900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (183.36s)
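The status output above suggests `minikube update-context`; a sketch of that remediation (hypothetical; note this profile was started without Kubernetes, so the kubeconfig endpoint error may simply mean there is no entry to repair):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Rewrites the kubeconfig endpoint for the profile to the VM's current IP.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"update-context", "-p", "mount-start-2-841900").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("update-context failed:", err)
	}
}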

TestMultiNode/serial/PingHostFrom2Pods (56.27s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4042655s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0603 05:27:11.940470    8288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-fc5497c4f-hmxqp): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4183902s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0603 05:27:22.769924    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-fc5497c4f-pm79t): exit status 1
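Both pods resolved host.minikube.internal but lost 100% of a single ICMP echo to the gateway 172.17.80.1; on Hyper-V runs this pattern typically points at the Windows host filtering ICMP from the guest network rather than at a fault inside the cluster. A small helper (hypothetical, not the test code; assumes kubectl is on PATH and the profile's context exists) to rerun the exact probe against any pod:

package main

import (
	"fmt"
	"os/exec"
)

// pingHostFromPod reruns the test's probe: one ICMP echo from a busybox pod
// to the host-side gateway address.
func pingHostFromPod(kctx, pod, hostIP string) error {
	out, err := exec.Command("kubectl", "--context", kctx, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	fmt.Printf("%s", out)
	return err // exit status 1 corresponds to 100% packet loss
}

func main() {
	if err := pingHostFromPod("multinode-316400", "busybox-fc5497c4f-hmxqp", "172.17.80.1"); err != nil {
		fmt.Println("ping failed:", err)
	}
}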
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-316400 -n multinode-316400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-316400 -n multinode-316400: (12.2155624s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 logs -n 25: (8.4290139s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-841900                           | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:12 PDT | 03 Jun 24 05:14 PDT |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:14 PDT |                     |
	|         | --profile mount-start-2-841900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-841900 ssh -- ls                    | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:14 PDT | 03 Jun 24 05:14 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-841900                           | mount-start-1-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:14 PDT | 03 Jun 24 05:15 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-841900 ssh -- ls                    | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:15 PDT | 03 Jun 24 05:15 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-841900                           | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:15 PDT | 03 Jun 24 05:15 PDT |
	| start   | -p mount-start-2-841900                           | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:15 PDT |                     |
	| delete  | -p mount-start-2-841900                           | mount-start-2-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:18 PDT | 03 Jun 24 05:20 PDT |
	| delete  | -p mount-start-1-841900                           | mount-start-1-841900 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:20 PDT | 03 Jun 24 05:20 PDT |
	| start   | -p multinode-316400                               | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:20 PDT | 03 Jun 24 05:26 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- apply -f                   | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- rollout                    | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- get pods -o                | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- get pods -o                | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-hmxqp --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-pm79t --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-hmxqp --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-pm79t --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-hmxqp -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-pm79t -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- get pods -o                | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-hmxqp                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT |                     |
	|         | busybox-fc5497c4f-hmxqp -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT | 03 Jun 24 05:27 PDT |
	|         | busybox-fc5497c4f-pm79t                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-316400 -- exec                       | multinode-316400     | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:27 PDT |                     |
	|         | busybox-fc5497c4f-pm79t -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 05:20:01
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 05:20:01.214297    6132 out.go:291] Setting OutFile to fd 1388 ...
	I0603 05:20:01.215105    6132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:20:01.215105    6132 out.go:304] Setting ErrFile to fd 1028...
	I0603 05:20:01.215105    6132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:20:01.238550    6132 out.go:298] Setting JSON to false
	I0603 05:20:01.242459    6132 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6429,"bootTime":1717410772,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 05:20:01.242459    6132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 05:20:01.248013    6132 out.go:177] * [multinode-316400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 05:20:01.252679    6132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:20:01.251726    6132 notify.go:220] Checking for updates...
	I0603 05:20:01.254807    6132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 05:20:01.257918    6132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 05:20:01.260289    6132 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 05:20:01.262410    6132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 05:20:01.266663    6132 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:20:01.267002    6132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 05:20:06.524941    6132 out.go:177] * Using the hyperv driver based on user configuration
	I0603 05:20:06.528837    6132 start.go:297] selected driver: hyperv
	I0603 05:20:06.528837    6132 start.go:901] validating driver "hyperv" against <nil>
	I0603 05:20:06.528837    6132 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 05:20:06.576582    6132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 05:20:06.578173    6132 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:20:06.578322    6132 cni.go:84] Creating CNI manager for ""
	I0603 05:20:06.578322    6132 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 05:20:06.578322    6132 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 05:20:06.578441    6132 start.go:340] cluster config:
	{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:20:06.578441    6132 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 05:20:06.583119    6132 out.go:177] * Starting "multinode-316400" primary control-plane node in "multinode-316400" cluster
	I0603 05:20:06.586164    6132 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:20:06.586164    6132 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 05:20:06.586164    6132 cache.go:56] Caching tarball of preloaded images
	I0603 05:20:06.586164    6132 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:20:06.586164    6132 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:20:06.586164    6132 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:20:06.587500    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json: {Name:mk20032242448dc7fe5831a7a3a04a86e2b82540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:20:06.588116    6132 start.go:360] acquireMachinesLock for multinode-316400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:20:06.588116    6132 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400"
	I0603 05:20:06.588116    6132 start.go:93] Provisioning new machine with config: &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 05:20:06.588116    6132 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 05:20:06.591166    6132 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 05:20:06.592182    6132 start.go:159] libmachine.API.Create for "multinode-316400" (driver="hyperv")
	I0603 05:20:06.592182    6132 client.go:168] LocalClient.Create starting
	I0603 05:20:06.592182    6132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 05:20:06.593178    6132 main.go:141] libmachine: Decoding PEM data...
	I0603 05:20:06.593178    6132 main.go:141] libmachine: Parsing certificate...
	I0603 05:20:06.593178    6132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 05:20:06.593178    6132 main.go:141] libmachine: Decoding PEM data...
	I0603 05:20:06.593178    6132 main.go:141] libmachine: Parsing certificate...
	I0603 05:20:06.593178    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 05:20:08.594849    6132 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 05:20:08.595050    6132 main.go:141] libmachine: [stderr =====>] : 
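
Each [executing ==>] / [stdout =====>] / [stderr =====>] triplet in this log is one synchronous round-trip through powershell.exe with -NoProfile -NonInteractive. A minimal sketch of that invocation pattern (the helper name runPowerShell is illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // runPowerShell executes a single PowerShell command non-interactively
    // and returns its stdout and stderr, mirroring the log's
    // [executing ==>] / [stdout =====>] / [stderr =====>] pattern.
    func runPowerShell(command string) (stdout, stderr string, err error) {
    	cmd := exec.Command(
    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", command,
    	)
    	var out, errBuf bytes.Buffer
    	cmd.Stdout = &out
    	cmd.Stderr = &errBuf
    	err = cmd.Run()
    	return out.String(), errBuf.String(), err
    }

    func main() {
    	// Same Hyper-V module probe as the step above.
    	stdout, stderr, err := runPowerShell(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
    	fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\nerr: %v\n", stdout, stderr, err)
    }
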
	I0603 05:20:08.595151    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 05:20:10.281301    6132 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 05:20:10.281512    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:10.281649    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 05:20:11.736962    6132 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 05:20:11.737161    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:11.737262    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 05:20:15.271541    6132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 05:20:15.271541    6132 main.go:141] libmachine: [stderr =====>] : 
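
The switch query above has PowerShell emit JSON via ConvertTo-Json so the caller can unmarshal it instead of scraping table output. A sketch of the receiving side, fed the exact stdout from this run (in Hyper-V's VMSwitchType enum, 1 is an internal switch, which is what the NAT'd "Default Switch" is):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // vmSwitch mirrors the fields selected in the PowerShell query:
    // Select Id, Name, SwitchType (0=Private, 1=Internal, 2=External).
    type vmSwitch struct {
    	Id         string `json:"Id"`
    	Name       string `json:"Name"`
    	SwitchType int    `json:"SwitchType"`
    }

    func main() {
    	// Stdout captured from ConvertTo-Json, as in the log above.
    	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
    	var switches []vmSwitch
    	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("Using switch %q (type %d)\n", s.Name, s.SwitchType)
    	}
    }
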
	I0603 05:20:15.274870    6132 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 05:20:15.805545    6132 main.go:141] libmachine: Creating SSH key...
	I0603 05:20:16.186611    6132 main.go:141] libmachine: Creating VM...
	I0603 05:20:16.186611    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 05:20:19.028223    6132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 05:20:19.028325    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:19.028325    6132 main.go:141] libmachine: Using switch "Default Switch"
	I0603 05:20:19.028494    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 05:20:20.766963    6132 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 05:20:20.766963    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:20.766963    6132 main.go:141] libmachine: Creating VHD
	I0603 05:20:20.766963    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 05:20:24.510990    6132 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 278AA735-82D6-4E79-BFBD-7EB35C186265
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 05:20:24.511040    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:24.511040    6132 main.go:141] libmachine: Writing magic tar header
	I0603 05:20:24.511040    6132 main.go:141] libmachine: Writing SSH key tar header
	I0603 05:20:24.523883    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 05:20:27.735621    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:27.735621    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:27.735621    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\disk.vhd' -SizeBytes 20000MB
	I0603 05:20:30.247561    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:30.247561    6132 main.go:141] libmachine: [stderr =====>] : 
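
The disk is built in a deliberately indirect way: a 10MB fixed-format VHD is created, a small tar stream (the "magic tar header" plus the SSH key) is written straight into its raw data area, and the file is then converted to a dynamic VHD and grown to the requested 20000MB. On first boot the guest spots the tar signature, formats the disk, and extracts the key. A rough sketch of the tar-writing step; the sentinel filename and entry layout follow the boot2docker convention used by docker-machine and are assumptions, not verified against this ISO:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeDiskTar writes a tar stream at the start of the fixed VHD.
    // A fixed VHD stores raw disk data first and its 512-byte footer
    // last, so offset 0 is the beginning of the virtual disk.
    func writeDiskTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f)
    	magic := "boot2docker, please format-me" // assumed sentinel the guest scans for
    	entries := []struct {
    		hdr  tar.Header
    		body []byte
    	}{
    		{tar.Header{Name: magic, Mode: 0644, Size: int64(len(magic))}, []byte(magic)},
    		{tar.Header{Name: ".ssh", Typeflag: tar.TypeDir, Mode: 0700}, nil},
    		{tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}, pubKey},
    	}
    	for _, e := range entries {
    		if err := tw.WriteHeader(&e.hdr); err != nil {
    			return err
    		}
    		if _, err := tw.Write(e.body); err != nil {
    			return err
    		}
    	}
    	return tw.Close()
    }

    func main() {
    	key, err := os.ReadFile("id_rsa.pub")
    	if err != nil {
    		panic(err)
    	}
    	if err := writeDiskTar("fixed.vhd", key); err != nil {
    		panic(err)
    	}
    }
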
	I0603 05:20:30.247561    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-316400 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 05:20:33.828438    6132 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-316400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 05:20:33.828438    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:33.829343    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-316400 -DynamicMemoryEnabled $false
	I0603 05:20:36.084169    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:36.084469    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:36.084469    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-316400 -Count 2
	I0603 05:20:38.240504    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:38.241485    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:38.241581    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-316400 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\boot2docker.iso'
	I0603 05:20:40.774905    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:40.774905    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:40.774905    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-316400 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\disk.vhd'
	I0603 05:20:43.497440    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:43.497773    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:43.497773    6132 main.go:141] libmachine: Starting VM...
	I0603 05:20:43.497773    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400
	I0603 05:20:46.649327    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:46.649327    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:46.649327    6132 main.go:141] libmachine: Waiting for host to start...
	I0603 05:20:46.649649    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:20:48.948619    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:20:48.948619    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:48.948619    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:20:51.538179    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:51.538179    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:52.553992    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:20:54.780280    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:20:54.781175    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:54.781276    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:20:57.351816    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:20:57.351816    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:20:58.357320    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:00.566852    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:00.566913    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:00.567008    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:03.119193    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:21:03.119276    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:04.131383    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:06.344571    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:06.344571    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:06.344969    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:08.868059    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:21:08.868539    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:09.873293    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:12.160568    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:12.160870    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:12.160870    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:14.706969    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:14.706969    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:14.707560    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:16.837486    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:16.837486    6132 main.go:141] libmachine: [stderr =====>] : 
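
"Waiting for host to start..." is a plain polling loop: query the VM state, then the first IP address on the first network adapter, sleep, and repeat until Hyper-V reports an address (about 28 seconds in this run). A condensed sketch of that loop (helper name and timeout value are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func psOutput(command string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM until its first adapter reports an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, _ := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
    		if state == "Running" {
    			ip, _ := psOutput(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    			if ip != "" {
    				return ip, nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for an IP on %q", vm)
    }

    func main() {
    	ip, err := waitForIP("multinode-316400", 5*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host is up at", ip) // 172.17.87.47 in this run
    }
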
	I0603 05:21:16.837486    6132 machine.go:94] provisionDockerMachine start ...
	I0603 05:21:16.838667    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:19.003590    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:19.003590    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:19.004168    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:21.509946    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:21.510071    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:21.519035    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:21.532592    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:21.532699    6132 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:21:21.664796    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:21:21.664796    6132 buildroot.go:166] provisioning hostname "multinode-316400"
	I0603 05:21:21.664916    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:23.761538    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:23.761538    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:23.761854    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:26.343608    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:26.343608    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:26.351003    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:26.351666    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:26.351666    6132 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400 && echo "multinode-316400" | sudo tee /etc/hostname
	I0603 05:21:26.539227    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400
	
	I0603 05:21:26.539356    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:28.631776    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:28.631833    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:28.631833    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:31.112572    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:31.113586    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:31.119393    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:31.120274    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:31.120274    6132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:21:31.267872    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
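
From here on, provisionDockerMachine drives the guest over SSH with the key generated earlier: setting the hostname, writing /etc/hostname, and patching /etc/hosts with the guard script above. A minimal sketch of one such round-trip using golang.org/x/crypto/ssh, with host, port, user, and key path taken from this run (InsecureIgnoreHostKey is only acceptable for a throwaway test VM):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPEM, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only, never production
    	}
    	client, err := ssh.Dial("tcp", "172.17.87.47:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	// Equivalent of the hostname step in the log.
    	out, err := session.CombinedOutput(`sudo hostname multinode-316400 && echo "multinode-316400" | sudo tee /etc/hostname`)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
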
	I0603 05:21:31.267872    6132 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:21:31.267872    6132 buildroot.go:174] setting up certificates
	I0603 05:21:31.267872    6132 provision.go:84] configureAuth start
	I0603 05:21:31.267872    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:33.348993    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:33.349618    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:33.349711    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:35.843484    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:35.843484    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:35.844233    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:37.976133    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:37.976308    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:37.976308    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:40.444625    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:40.444625    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:40.444723    6132 provision.go:143] copyHostCerts
	I0603 05:21:40.444853    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:21:40.444853    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:21:40.444853    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:21:40.445562    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:21:40.446784    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:21:40.447322    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:21:40.447322    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:21:40.447797    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:21:40.449155    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:21:40.449155    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:21:40.449155    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:21:40.449976    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:21:40.450749    6132 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400 san=[127.0.0.1 172.17.87.47 localhost minikube multinode-316400]
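
The server certificate is issued from the local CA with subject alternative names covering every address the Docker daemon may be reached at: 127.0.0.1, the VM IP 172.17.87.47, localhost, minikube, and the machine name. A compact crypto/x509 sketch of issuing such a cert (a toy in-memory CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Toy CA; the real flow loads ca.pem/ca-key.pem from disk.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-316400"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-316400"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.87.47")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
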
	I0603 05:21:40.774763    6132 provision.go:177] copyRemoteCerts
	I0603 05:21:40.786608    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:21:40.786608    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:42.881747    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:42.881747    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:42.881747    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:45.390732    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:45.390732    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:45.391866    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:21:45.498063    6132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7112868s)
	I0603 05:21:45.498361    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:21:45.499281    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:21:45.543351    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:21:45.543351    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 05:21:45.584690    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:21:45.584690    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 05:21:45.627366    6132 provision.go:87] duration metric: took 14.3594449s to configureAuth
	I0603 05:21:45.627452    6132 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:21:45.627509    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:21:45.627509    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:47.704168    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:47.704256    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:47.704321    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:50.213920    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:50.214907    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:50.219869    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:50.220396    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:50.220536    6132 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:21:50.356201    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:21:50.356310    6132 buildroot.go:70] root file system type: tmpfs
	I0603 05:21:50.356474    6132 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:21:50.356566    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:52.445088    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:52.445169    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:52.445291    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:54.914143    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:54.914143    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:54.919760    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:54.920513    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:54.920513    6132 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:21:55.079966    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:21:55.080162    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:21:57.138958    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:21:57.138958    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:57.139365    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:21:59.635312    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:21:59.636178    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:21:59.642827    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:21:59.643369    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:21:59.643369    6132 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:22:01.720993    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:22:01.720993    6132 machine.go:97] duration metric: took 44.8833548s to provisionDockerMachine
	I0603 05:22:01.721085    6132 client.go:171] duration metric: took 1m55.1285119s to LocalClient.Create
	I0603 05:22:01.721085    6132 start.go:167] duration metric: took 1m55.1285119s to libmachine.API.Create "multinode-316400"
	I0603 05:22:01.721161    6132 start.go:293] postStartSetup for "multinode-316400" (driver="hyperv")
	I0603 05:22:01.721161    6132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:22:01.733053    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:22:01.733053    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:03.829349    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:03.829349    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:03.829349    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:06.323380    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:06.323380    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:06.323967    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:22:06.428080    6132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6950116s)
	I0603 05:22:06.443818    6132 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:22:06.449516    6132 command_runner.go:130] > NAME=Buildroot
	I0603 05:22:06.449516    6132 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:22:06.449516    6132 command_runner.go:130] > ID=buildroot
	I0603 05:22:06.449516    6132 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:22:06.449516    6132 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:22:06.450331    6132 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:22:06.450426    6132 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:22:06.450813    6132 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:22:06.452000    6132 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:22:06.452086    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:22:06.467850    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:22:06.485666    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:22:06.530474    6132 start.go:296] duration metric: took 4.8092967s for postStartSetup
	I0603 05:22:06.533966    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:08.638165    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:08.638165    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:08.638246    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:11.158067    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:11.159182    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:11.159182    6132 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:22:11.162091    6132 start.go:128] duration metric: took 2m4.5735509s to createHost
	I0603 05:22:11.162091    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:13.283059    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:13.283855    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:13.283855    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:15.792502    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:15.792502    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:15.797842    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:22:15.797842    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:22:15.797842    6132 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 05:22:15.939427    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417335.940850230
	
	I0603 05:22:15.939969    6132 fix.go:216] guest clock: 1717417335.940850230
	I0603 05:22:15.939969    6132 fix.go:229] Guest: 2024-06-03 05:22:15.94085023 -0700 PDT Remote: 2024-06-03 05:22:11.1620914 -0700 PDT m=+130.035848001 (delta=4.77875883s)
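
The guest clock check runs `date +%s.%N` in the VM, compares it with the host clock (a 4.77s delta here), and resets the guest with `sudo date -s @<epoch>`. A sketch of the delta computation; the 2-second threshold is a hypothetical value, as minikube's actual cutoff is not visible in this log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1717417335.940850230" (date +%s.%N) into time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1717417335.940850230") // stdout from this run
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(time.Now())
    	fmt.Printf("guest clock: %v, delta: %v\n", guest, delta)
    	if delta > 2*time.Second || delta < -2*time.Second { // hypothetical threshold
    		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
    	}
    }
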
	I0603 05:22:15.940053    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:18.066141    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:18.066141    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:18.066236    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:20.555242    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:20.555242    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:20.563655    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:22:20.564467    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.87.47 22 <nil> <nil>}
	I0603 05:22:20.564467    6132 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717417335
	I0603 05:22:20.703735    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:22:15 UTC 2024
	
	I0603 05:22:20.703735    6132 fix.go:236] clock set: Mon Jun  3 12:22:15 UTC 2024
	 (err=<nil>)
	I0603 05:22:20.703735    6132 start.go:83] releasing machines lock for "multinode-316400", held for 2m14.1151622s
	I0603 05:22:20.704022    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:22.808898    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:22.809099    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:22.809192    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:25.343317    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:25.343695    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:25.349158    6132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:22:25.349158    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:25.359095    6132 ssh_runner.go:195] Run: cat /version.json
	I0603 05:22:25.359095    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:22:27.624016    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:27.624942    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:27.624942    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:27.624942    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:22:27.624942    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:27.626288    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:22:30.270205    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:30.271027    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:30.271098    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:22:30.297710    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:22:30.297780    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:22:30.297780    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:22:30.458673    6132 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:22:30.458830    6132 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1096548s)
	I0603 05:22:30.458937    6132 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 05:22:30.458937    6132 ssh_runner.go:235] Completed: cat /version.json: (5.0998245s)
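
The /version.json probe confirms the booted ISO matches the build that produced this binary. The payload printed above unmarshals cleanly into a small struct:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type isoVersionInfo struct {
    	IsoVersion      string `json:"iso_version"`
    	KicbaseVersion  string `json:"kicbase_version"`
    	MinikubeVersion string `json:"minikube_version"`
    	Commit          string `json:"commit"`
    }

    func main() {
    	// Exact payload returned by `cat /version.json` in this run.
    	raw := `{"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}`
    	var v isoVersionInfo
    	if err := json.Unmarshal([]byte(raw), &v); err != nil {
    		panic(err)
    	}
    	fmt.Printf("ISO %s built for minikube %s (commit %s)\n", v.IsoVersion, v.MinikubeVersion, v.Commit)
    }
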
	I0603 05:22:30.471032    6132 ssh_runner.go:195] Run: systemctl --version
	I0603 05:22:30.480873    6132 command_runner.go:130] > systemd 252 (252)
	I0603 05:22:30.480873    6132 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 05:22:30.497907    6132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:22:30.506839    6132 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 05:22:30.507216    6132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:22:30.519707    6132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:22:30.547178    6132 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:22:30.547430    6132 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 05:22:30.547485    6132 start.go:494] detecting cgroup driver to use...
	I0603 05:22:30.547542    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:22:30.579291    6132 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:22:30.592771    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:22:30.625824    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:22:30.643253    6132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:22:30.655363    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:22:30.685596    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:22:30.715347    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:22:30.745227    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:22:30.776995    6132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:22:30.807919    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:22:30.839909    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:22:30.874311    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:22:30.910384    6132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:22:30.929655    6132 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:22:30.945998    6132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:22:30.977930    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:31.173351    6132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 05:22:31.202744    6132 start.go:494] detecting cgroup driver to use...
	I0603 05:22:31.214965    6132 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:22:31.238480    6132 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:22:31.238529    6132 command_runner.go:130] > [Unit]
	I0603 05:22:31.238529    6132 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:22:31.238529    6132 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:22:31.238529    6132 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:22:31.238529    6132 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:22:31.238607    6132 command_runner.go:130] > StartLimitBurst=3
	I0603 05:22:31.238607    6132 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:22:31.238607    6132 command_runner.go:130] > [Service]
	I0603 05:22:31.238607    6132 command_runner.go:130] > Type=notify
	I0603 05:22:31.238607    6132 command_runner.go:130] > Restart=on-failure
	I0603 05:22:31.238607    6132 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:22:31.238657    6132 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:22:31.238657    6132 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:22:31.238657    6132 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:22:31.238657    6132 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:22:31.238657    6132 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:22:31.238721    6132 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:22:31.238721    6132 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:22:31.238721    6132 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:22:31.238780    6132 command_runner.go:130] > ExecStart=
	I0603 05:22:31.238780    6132 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:22:31.238780    6132 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:22:31.238839    6132 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:22:31.238839    6132 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:22:31.238839    6132 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:22:31.238839    6132 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:22:31.238839    6132 command_runner.go:130] > LimitCORE=infinity
	I0603 05:22:31.238897    6132 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:22:31.238897    6132 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:22:31.238897    6132 command_runner.go:130] > TasksMax=infinity
	I0603 05:22:31.238897    6132 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:22:31.238897    6132 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:22:31.238956    6132 command_runner.go:130] > Delegate=yes
	I0603 05:22:31.238956    6132 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:22:31.238956    6132 command_runner.go:130] > KillMode=process
	I0603 05:22:31.238956    6132 command_runner.go:130] > [Install]
	I0603 05:22:31.238956    6132 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:22:31.250045    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:22:31.280405    6132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:22:31.321762    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:22:31.361171    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:22:31.393419    6132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:22:31.456409    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:22:31.483692    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:22:31.516898    6132 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:22:31.529240    6132 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:22:31.535028    6132 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:22:31.547771    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:22:31.564856    6132 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:22:31.606170    6132 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:22:31.804906    6132 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:22:32.012934    6132 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:22:32.013226    6132 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
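
The 130-byte /etc/docker/daemon.json pushed here carries the "cgroupfs" driver choice made above, but its contents are not echoed in the log. The keys below are therefore an assumed reconstruction based on the standard exec-opts mechanism, not the literal file:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Hypothetical reconstruction of a daemon.json selecting cgroupfs;
    	// the real file's exact contents are not shown in this log.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // would be scp'd to /etc/docker/daemon.json
    }
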
	I0603 05:22:32.072627    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:32.258105    6132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:22:34.755438    6132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4973242s)
	I0603 05:22:34.769424    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:22:34.803062    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:22:34.851740    6132 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:22:35.040411    6132 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:22:35.228145    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:35.417031    6132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:22:35.461243    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:22:35.497745    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:35.703147    6132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:22:35.808362    6132 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:22:35.821029    6132 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:22:35.829129    6132 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:22:35.829311    6132 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:22:35.829311    6132 command_runner.go:130] > Device: 0,22	Inode: 878         Links: 1
	I0603 05:22:35.829311    6132 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:22:35.829311    6132 command_runner.go:130] > Access: 2024-06-03 12:22:35.733205925 +0000
	I0603 05:22:35.829371    6132 command_runner.go:130] > Modify: 2024-06-03 12:22:35.733205925 +0000
	I0603 05:22:35.829371    6132 command_runner.go:130] > Change: 2024-06-03 12:22:35.736205928 +0000
	I0603 05:22:35.829371    6132 command_runner.go:130] >  Birth: -
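
start.go then waits up to 60s for /var/run/cri-dockerd.sock to appear before proceeding, which the stat above satisfies immediately. A sketch of such a poll loop, assuming the same path and timeout:

package main

import (
	"fmt"
	"os"
	"time"
)

// Poll for the CRI socket the way the 60s wait above does: retry stat
// until a socket exists at the path or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}
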
	I0603 05:22:35.830042    6132 start.go:562] Will wait 60s for crictl version
	I0603 05:22:35.841912    6132 ssh_runner.go:195] Run: which crictl
	I0603 05:22:35.847943    6132 command_runner.go:130] > /usr/bin/crictl
	I0603 05:22:35.858164    6132 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:22:35.905735    6132 command_runner.go:130] > Version:  0.1.0
	I0603 05:22:35.906424    6132 command_runner.go:130] > RuntimeName:  docker
	I0603 05:22:35.906424    6132 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:22:35.906424    6132 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:22:35.906525    6132 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:22:35.914940    6132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:22:35.944752    6132 command_runner.go:130] > 26.0.2
	I0603 05:22:35.954776    6132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:22:35.985027    6132 command_runner.go:130] > 26.0.2
	I0603 05:22:35.989598    6132 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:22:35.989598    6132 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:22:35.994630    6132 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:22:35.994630    6132 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:22:35.994630    6132 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:22:35.994630    6132 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:22:35.997597    6132 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:22:35.997597    6132 ip.go:210] interface addr: 172.17.80.1/20
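
The ip.go lines above enumerate the host's NICs and take the first one whose name matches the "vEthernet (Default Switch)" prefix, then use its IPv4 address (172.17.80.1 here) as host.minikube.internal. A sketch of that interface scan using only the standard net package:

package main

import (
	"fmt"
	"net"
	"strings"
)

// Find the first interface whose name starts with the given prefix and
// print its IPv4 address, as in the getIPForInterface log lines above.
func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				fmt.Println(ifc.Name, "->", ipn.IP) // e.g. 172.17.80.1
			}
		}
	}
}
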
	I0603 05:22:36.008595    6132 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:22:36.015286    6132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:22:36.034810    6132 kubeadm.go:877] updating cluster {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 05:22:36.034810    6132 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:22:36.044009    6132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:22:36.068631    6132 docker.go:685] Got preloaded images: 
	I0603 05:22:36.068631    6132 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 05:22:36.080665    6132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 05:22:36.098638    6132 command_runner.go:139] > {"Repositories":{}}
	I0603 05:22:36.110623    6132 ssh_runner.go:195] Run: which lz4
	I0603 05:22:36.115645    6132 command_runner.go:130] > /usr/bin/lz4
	I0603 05:22:36.116746    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 05:22:36.129059    6132 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 05:22:36.134636    6132 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 05:22:36.135542    6132 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 05:22:36.135542    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 05:22:38.030915    6132 docker.go:649] duration metric: took 1.9136999s to copy over tarball
	I0603 05:22:38.048920    6132 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 05:22:46.598983    6132 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5500344s)
	I0603 05:22:46.599121    6132 ssh_runner.go:146] rm: /preloaded.tar.lz4
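
The preload path copies a ~360 MB lz4-compressed image tarball into the guest and unpacks it into /var with security xattrs preserved, which is why the extract step above takes ~8.5s. The same tar invocation driven from Go (assumes tar, lz4, and passwordless sudo on the guest, as on the minikube ISO):

package main

import (
	"fmt"
	"os/exec"
)

// Unpack the preloaded image tarball the same way the ssh_runner call
// above does: tar with an lz4 decompressor, preserving security xattrs.
func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
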
	I0603 05:22:46.664413    6132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 05:22:46.684701    6132 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0603 05:22:46.684840    6132 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 05:22:46.730356    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:46.942612    6132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:22:49.914087    6132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9714644s)
	I0603 05:22:49.923069    6132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:22:49.945217    6132 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 05:22:49.945217    6132 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 05:22:49.945217    6132 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 05:22:49.945217    6132 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 05:22:49.945338    6132 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 05:22:49.945338    6132 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 05:22:49.945338    6132 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 05:22:49.945338    6132 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:22:49.945338    6132 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 05:22:49.945486    6132 cache_images.go:84] Images are preloaded, skipping loading
	I0603 05:22:49.945486    6132 kubeadm.go:928] updating node { 172.17.87.47 8443 v1.30.1 docker true true} ...
	I0603 05:22:49.945486    6132 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.87.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 05:22:49.954765    6132 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 05:22:49.988408    6132 command_runner.go:130] > cgroupfs
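
The kubelet's cgroup driver has to match the container runtime's, so the runtime is queried first; Docker reports cgroupfs here because of the daemon.json written earlier. The same query from Go (assumes the docker CLI can reach the daemon):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Ask Docker which cgroup driver it runs with, as the
// `docker info --format {{.CgroupDriver}}` call above does.
func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // cgroupfs or systemd
}
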
	I0603 05:22:49.989192    6132 cni.go:84] Creating CNI manager for ""
	I0603 05:22:49.989192    6132 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 05:22:49.989262    6132 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 05:22:49.989262    6132 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.87.47 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-316400 NodeName:multinode-316400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.87.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.87.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 05:22:49.989641    6132 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.87.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-316400"
	  kubeletExtraArgs:
	    node-ip: 172.17.87.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.87.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
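The rendered kubeadm.yaml above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that sanity-checks such a file before init ('kubeadm config validate' exists in recent kubeadm releases, including the v1.30 line used here; the binary path is the one the log shows):

package main

import (
	"fmt"
	"os/exec"
)

// Validate the generated multi-document kubeadm config before `kubeadm init`.
func main() {
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubeadm",
		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}
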
	I0603 05:22:50.002196    6132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:22:50.019818    6132 command_runner.go:130] > kubeadm
	I0603 05:22:50.019818    6132 command_runner.go:130] > kubectl
	I0603 05:22:50.019818    6132 command_runner.go:130] > kubelet
	I0603 05:22:50.019818    6132 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 05:22:50.029822    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 05:22:50.048654    6132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0603 05:22:50.077290    6132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:22:50.104566    6132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0603 05:22:50.144556    6132 ssh_runner.go:195] Run: grep 172.17.87.47	control-plane.minikube.internal$ /etc/hosts
	I0603 05:22:50.149444    6132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.87.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:22:50.177869    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:22:50.354940    6132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:22:50.383938    6132 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.87.47
	I0603 05:22:50.383976    6132 certs.go:194] generating shared ca certs ...
	I0603 05:22:50.384020    6132 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.384594    6132 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:22:50.384594    6132 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:22:50.385264    6132 certs.go:256] generating profile certs ...
	I0603 05:22:50.385424    6132 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.key
	I0603 05:22:50.385424    6132 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.crt with IP's: []
	I0603 05:22:50.547465    6132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.crt ...
	I0603 05:22:50.547465    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.crt: {Name:mk6d6838756bb27d97d046d8ad494d34e245100d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.549465    6132 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.key ...
	I0603 05:22:50.549465    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.key: {Name:mk82491164b800e3533738657e4e736f8a6a0608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.550480    6132 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.36bc5c1c
	I0603 05:22:50.550480    6132 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.36bc5c1c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.87.47]
	I0603 05:22:50.699095    6132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.36bc5c1c ...
	I0603 05:22:50.699095    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.36bc5c1c: {Name:mk9acbafbdfbcbef113e7d1931a24b2f89e544a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.700154    6132 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.36bc5c1c ...
	I0603 05:22:50.701147    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.36bc5c1c: {Name:mkba9ba31184e32ff7f5a07c2a98c6639838b837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.702145    6132 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.36bc5c1c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt
	I0603 05:22:50.713123    6132 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.36bc5c1c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key
	I0603 05:22:50.714135    6132 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key
	I0603 05:22:50.714135    6132 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt with IP's: []
	I0603 05:22:50.843384    6132 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt ...
	I0603 05:22:50.843384    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt: {Name:mkfbf2b6756c09a9410292ffc9b8dc92c98e541a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:22:50.844974    6132 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key ...
	I0603 05:22:50.844974    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key: {Name:mka21c2e9ccc78a08120f66752efddf7085eb884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
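
The profile certs generated above are CA-signed, with the apiserver cert issued for the service VIP (10.96.0.1), loopback, and the node IP. For the bare x509 mechanics, here is a self-signed sketch carrying the same IP SANs (minikube signs with its own CA instead; this only illustrates the certificate template):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Generate a self-signed cert with the IP SANs the apiserver cert above
// is issued for, and print it PEM-encoded.
func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.17.87.47"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
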
	I0603 05:22:50.846137    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:22:50.846137    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:22:50.846137    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:22:50.846665    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:22:50.846876    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 05:22:50.846876    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 05:22:50.846876    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 05:22:50.856415    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 05:22:50.857439    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:22:50.857439    6132 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:22:50.857439    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:22:50.858364    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:22:50.858364    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:22:50.858364    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:22:50.858364    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:22:50.859365    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:22:50.859365    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:22:50.859365    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:22:50.860367    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:22:50.908538    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:22:50.950450    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:22:50.998146    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:22:51.039955    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 05:22:51.084760    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 05:22:51.125751    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 05:22:51.166455    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 05:22:51.213712    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:22:51.255231    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:22:51.301351    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:22:51.344564    6132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 05:22:51.384789    6132 ssh_runner.go:195] Run: openssl version
	I0603 05:22:51.393415    6132 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:22:51.404927    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:22:51.435113    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:22:51.447529    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:22:51.448011    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:22:51.453697    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:22:51.466626    6132 command_runner.go:130] > 51391683
	I0603 05:22:51.475620    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 05:22:51.506112    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:22:51.538684    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:22:51.548010    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:22:51.548010    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:22:51.561975    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:22:51.570708    6132 command_runner.go:130] > 3ec20f2e
	I0603 05:22:51.585122    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 05:22:51.618121    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:22:51.649459    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:22:51.655719    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:22:51.655719    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:22:51.668853    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:22:51.677116    6132 command_runner.go:130] > b5213941
	I0603 05:22:51.688477    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
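
Each trusted PEM above gets an OpenSSL subject-hash symlink (<hash>.0) under /etc/ssl/certs, which is how OpenSSL's -CApath lookup locates CA certificates. The same two steps from Go (assumes openssl on PATH; creating the link needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Compute the OpenSSL subject hash of a PEM and link
// /etc/ssl/certs/<hash>.0 to it, mirroring the ln -fs step above.
func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // mirror ln -fs semantics
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
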
	I0603 05:22:51.721414    6132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:22:51.727218    6132 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:22:51.727573    6132 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:22:51.728049    6132 kubeadm.go:391] StartCluster: {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:22:51.736146    6132 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 05:22:51.767916    6132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 05:22:51.784857    6132 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0603 05:22:51.784906    6132 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0603 05:22:51.784906    6132 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0603 05:22:51.799314    6132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 05:22:51.827925    6132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 05:22:51.842227    6132 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 05:22:51.842926    6132 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 05:22:51.843100    6132 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 05:22:51.843100    6132 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:22:51.843100    6132 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:22:51.843100    6132 kubeadm.go:156] found existing configuration files:
	
	I0603 05:22:51.854689    6132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 05:22:51.869118    6132 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:22:51.869758    6132 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:22:51.882080    6132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 05:22:51.933407    6132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 05:22:51.947258    6132 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:22:51.947357    6132 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:22:51.956754    6132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 05:22:51.983750    6132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 05:22:52.001442    6132 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:22:52.002097    6132 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:22:52.013800    6132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 05:22:52.047834    6132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 05:22:52.066504    6132 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:22:52.067000    6132 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:22:52.082741    6132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
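
The cleanup pass above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any that do not reference it; on this first start none exist, so every grep exits 2 and each rm is a no-op. The same loop sketched in Go:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Remove kubeconfigs that don't point at the expected endpoint, mirroring
// the grep/rm sequence above. Missing files are treated like stale ones.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // still points at the right control plane
		}
		os.Remove(path) // no-op if the file never existed
		fmt.Println("removed (or absent):", path)
	}
}
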
	I0603 05:22:52.100852    6132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 05:22:52.453858    6132 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 05:22:52.453858    6132 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 05:23:04.836016    6132 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 05:23:04.836016    6132 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0603 05:23:04.836190    6132 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:23:04.836190    6132 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 05:23:04.836447    6132 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 05:23:04.836447    6132 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 05:23:04.836447    6132 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 05:23:04.836447    6132 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 05:23:04.836447    6132 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0603 05:23:04.836983    6132 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0603 05:23:04.837323    6132 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 05:23:04.837323    6132 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 05:23:04.840808    6132 out.go:204]   - Generating certificates and keys ...
	I0603 05:23:04.841098    6132 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 05:23:04.841161    6132 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 05:23:04.841462    6132 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 05:23:04.841541    6132 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 05:23:04.841737    6132 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 05:23:04.841737    6132 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 05:23:04.841990    6132 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0603 05:23:04.841990    6132 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 05:23:04.842193    6132 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0603 05:23:04.842193    6132 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 05:23:04.842367    6132 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0603 05:23:04.842406    6132 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 05:23:04.842613    6132 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0603 05:23:04.842613    6132 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 05:23:04.842786    6132 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-316400] and IPs [172.17.87.47 127.0.0.1 ::1]
	I0603 05:23:04.842786    6132 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-316400] and IPs [172.17.87.47 127.0.0.1 ::1]
	I0603 05:23:04.842786    6132 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 05:23:04.843332    6132 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0603 05:23:04.843377    6132 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-316400] and IPs [172.17.87.47 127.0.0.1 ::1]
	I0603 05:23:04.843377    6132 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-316400] and IPs [172.17.87.47 127.0.0.1 ::1]
	I0603 05:23:04.843377    6132 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 05:23:04.843377    6132 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 05:23:04.843377    6132 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 05:23:04.843911    6132 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 05:23:04.843911    6132 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 05:23:04.843911    6132 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0603 05:23:04.844215    6132 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 05:23:04.844381    6132 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 05:23:04.844444    6132 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 05:23:04.844444    6132 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 05:23:04.844444    6132 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 05:23:04.844444    6132 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 05:23:04.844444    6132 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 05:23:04.844444    6132 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 05:23:04.844444    6132 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 05:23:04.844444    6132 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 05:23:04.845163    6132 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 05:23:04.845163    6132 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 05:23:04.845163    6132 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 05:23:04.845163    6132 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 05:23:04.845163    6132 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 05:23:04.845163    6132 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 05:23:04.850989    6132 out.go:204]   - Booting up control plane ...
	I0603 05:23:04.851054    6132 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 05:23:04.851054    6132 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 05:23:04.851054    6132 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 05:23:04.851054    6132 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 05:23:04.851054    6132 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 05:23:04.851054    6132 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 05:23:04.851734    6132 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:23:04.851734    6132 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:23:04.851734    6132 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:23:04.851734    6132 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:23:04.851734    6132 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:23:04.851734    6132 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 05:23:04.852555    6132 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 05:23:04.852603    6132 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 05:23:04.852679    6132 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 05:23:04.852679    6132 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 05:23:04.852679    6132 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.630053ms
	I0603 05:23:04.852679    6132 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.630053ms
	I0603 05:23:04.852679    6132 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 05:23:04.852679    6132 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 05:23:04.853256    6132 kubeadm.go:309] [api-check] The API server is healthy after 7.003434359s
	I0603 05:23:04.853256    6132 command_runner.go:130] > [api-check] The API server is healthy after 7.003434359s
	I0603 05:23:04.853606    6132 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 05:23:04.853606    6132 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 05:23:04.853746    6132 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 05:23:04.853746    6132 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 05:23:04.853746    6132 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0603 05:23:04.853746    6132 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 05:23:04.854601    6132 command_runner.go:130] > [mark-control-plane] Marking the node multinode-316400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 05:23:04.854601    6132 kubeadm.go:309] [mark-control-plane] Marking the node multinode-316400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 05:23:04.854601    6132 kubeadm.go:309] [bootstrap-token] Using token: 312xwp.7gjf7t4r0dswugfe
	I0603 05:23:04.854601    6132 command_runner.go:130] > [bootstrap-token] Using token: 312xwp.7gjf7t4r0dswugfe
	I0603 05:23:04.856650    6132 out.go:204]   - Configuring RBAC rules ...
	I0603 05:23:04.856650    6132 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 05:23:04.856650    6132 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 05:23:04.856650    6132 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 05:23:04.857746    6132 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 05:23:04.857951    6132 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 05:23:04.857951    6132 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 05:23:04.858343    6132 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 05:23:04.858383    6132 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 05:23:04.858517    6132 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 05:23:04.858517    6132 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 05:23:04.858847    6132 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 05:23:04.858847    6132 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 05:23:04.859048    6132 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 05:23:04.859048    6132 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 05:23:04.859048    6132 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 05:23:04.859255    6132 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 05:23:04.859309    6132 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 05:23:04.859309    6132 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 05:23:04.859309    6132 kubeadm.go:309] 
	I0603 05:23:04.859309    6132 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0603 05:23:04.859309    6132 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 05:23:04.859309    6132 kubeadm.go:309] 
	I0603 05:23:04.859309    6132 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0603 05:23:04.859309    6132 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 05:23:04.859309    6132 kubeadm.go:309] 
	I0603 05:23:04.859904    6132 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0603 05:23:04.860024    6132 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 05:23:04.860188    6132 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 05:23:04.860236    6132 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 05:23:04.860326    6132 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 05:23:04.860370    6132 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 05:23:04.860370    6132 kubeadm.go:309] 
	I0603 05:23:04.860603    6132 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0603 05:23:04.860655    6132 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 05:23:04.860655    6132 kubeadm.go:309] 
	I0603 05:23:04.860753    6132 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 05:23:04.860807    6132 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 05:23:04.860881    6132 kubeadm.go:309] 
	I0603 05:23:04.861035    6132 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 05:23:04.861093    6132 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0603 05:23:04.861159    6132 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 05:23:04.861159    6132 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 05:23:04.861159    6132 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 05:23:04.861865    6132 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 05:23:04.862038    6132 kubeadm.go:309] 
	I0603 05:23:04.862915    6132 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0603 05:23:04.862915    6132 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 05:23:04.863294    6132 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0603 05:23:04.863294    6132 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 05:23:04.863294    6132 kubeadm.go:309] 
	I0603 05:23:04.863812    6132 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 312xwp.7gjf7t4r0dswugfe \
	I0603 05:23:04.863921    6132 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 312xwp.7gjf7t4r0dswugfe \
	I0603 05:23:04.864058    6132 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 \
	I0603 05:23:04.864058    6132 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 \
	I0603 05:23:04.864058    6132 command_runner.go:130] > 	--control-plane 
	I0603 05:23:04.864058    6132 kubeadm.go:309] 	--control-plane 
	I0603 05:23:04.864340    6132 kubeadm.go:309] 
	I0603 05:23:04.864404    6132 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0603 05:23:04.864404    6132 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 05:23:04.864404    6132 kubeadm.go:309] 
	I0603 05:23:04.864647    6132 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 312xwp.7gjf7t4r0dswugfe \
	I0603 05:23:04.864647    6132 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 312xwp.7gjf7t4r0dswugfe \
	I0603 05:23:04.864820    6132 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
	I0603 05:23:04.864820    6132 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
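Note: the bootstrap token printed above is short-lived (24h by default), so the join commands in this log will not work indefinitely. A minimal sketch for regenerating a worker join command on the control-plane node, assuming kubeadm is on the PATH:

  # prints a fresh, ready-to-run "kubeadm join ..." line with a new token
  kubeadm token create --print-join-command
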
	I0603 05:23:04.864820    6132 cni.go:84] Creating CNI manager for ""
	I0603 05:23:04.864820    6132 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 05:23:04.873695    6132 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 05:23:04.887903    6132 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 05:23:04.900866    6132 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 05:23:04.900866    6132 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 05:23:04.900866    6132 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 05:23:04.900948    6132 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 05:23:04.900948    6132 command_runner.go:130] > Access: 2024-06-03 12:21:10.779883800 +0000
	I0603 05:23:04.900948    6132 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 05:23:04.900948    6132 command_runner.go:130] > Change: 2024-06-03 05:21:02.600000000 +0000
	I0603 05:23:04.900948    6132 command_runner.go:130] >  Birth: -
	I0603 05:23:04.901057    6132 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 05:23:04.901109    6132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 05:23:04.946645    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 05:23:05.271495    6132 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0603 05:23:05.271564    6132 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0603 05:23:05.271564    6132 command_runner.go:130] > serviceaccount/kindnet created
	I0603 05:23:05.271564    6132 command_runner.go:130] > daemonset.apps/kindnet created
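Note: with the kindnet manifest applied, the CNI rollout can be checked independently of minikube; a quick sketch, assuming the same kubeconfig as above:

  # blocks until the kindnet pod is scheduled and ready on every node
  kubectl -n kube-system rollout status daemonset kindnet
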
	I0603 05:23:05.271564    6132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 05:23:05.286836    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-316400 minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=multinode-316400 minikube.k8s.io/primary=true
	I0603 05:23:05.286836    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:05.297065    6132 command_runner.go:130] > -16
	I0603 05:23:05.297294    6132 ops.go:34] apiserver oom_adj: -16
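Note: the -16 read back from the legacy /proc/<pid>/oom_adj file corresponds to an oom_score_adj of roughly -997, the value the kubelet assigns to critical static pods so the kernel avoids OOM-killing the apiserver under memory pressure. The modern interface can be read directly; a sketch:

  # a strongly negative value marks the process as nearly exempt from the OOM killer
  cat /proc/$(pgrep kube-apiserver)/oom_score_adj
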
	I0603 05:23:05.437892    6132 command_runner.go:130] > node/multinode-316400 labeled
	I0603 05:23:05.440618    6132 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0603 05:23:05.452269    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:05.621279    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:05.964750    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:06.079591    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:06.467795    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:06.580959    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:06.955340    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:07.058314    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:07.458491    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:07.559221    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:07.962263    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:08.065635    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:08.464714    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:08.568723    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:08.964553    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:09.079357    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:09.463890    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:09.566115    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:09.967489    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:10.087291    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:10.453561    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:10.565437    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:10.957014    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:11.064070    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:11.460196    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:11.584186    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:11.958918    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:12.086797    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:12.460298    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:12.574597    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:12.965819    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:13.073054    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:13.452537    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:13.555898    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:13.953995    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:14.100612    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:14.459160    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:14.572984    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:14.964118    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:15.078109    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:15.465726    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:15.574606    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:15.964543    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:16.079233    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:16.454277    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:16.566378    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:16.957997    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:17.068298    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:17.461864    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:17.568680    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:17.961711    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:18.180720    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:18.465551    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:18.586096    6132 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 05:23:18.967723    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 05:23:19.143778    6132 command_runner.go:130] > NAME      SECRETS   AGE
	I0603 05:23:19.144548    6132 command_runner.go:130] > default   0         1s
	I0603 05:23:19.144548    6132 kubeadm.go:1107] duration metric: took 13.8729362s to wait for elevateKubeSystemPrivileges
	W0603 05:23:19.144682    6132 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 05:23:19.144682    6132 kubeadm.go:393] duration metric: took 27.4165873s to StartCluster
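Note: the long run of NotFound errors above is expected; minikube simply retries "kubectl get sa default" until the controller-manager creates the namespace's default service account (13.9s here). An equivalent shell sketch of the same wait:

  # block until the default service account exists (kubeadm post-init wait)
  until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
    sleep 1
  done
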
	I0603 05:23:19.144728    6132 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:23:19.144728    6132 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:23:19.147089    6132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:23:19.148569    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 05:23:19.148779    6132 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 05:23:19.152199    6132 out.go:177] * Verifying Kubernetes components...
	I0603 05:23:19.148779    6132 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 05:23:19.149143    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:23:19.161249    6132 addons.go:69] Setting storage-provisioner=true in profile "multinode-316400"
	I0603 05:23:19.161249    6132 addons.go:234] Setting addon storage-provisioner=true in "multinode-316400"
	I0603 05:23:19.161249    6132 addons.go:69] Setting default-storageclass=true in profile "multinode-316400"
	I0603 05:23:19.161249    6132 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:23:19.161249    6132 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-316400"
	I0603 05:23:19.163281    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:23:19.164246    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:23:19.175265    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:23:19.563054    6132 command_runner.go:130] > apiVersion: v1
	I0603 05:23:19.563170    6132 command_runner.go:130] > data:
	I0603 05:23:19.563170    6132 command_runner.go:130] >   Corefile: |
	I0603 05:23:19.563170    6132 command_runner.go:130] >     .:53 {
	I0603 05:23:19.563220    6132 command_runner.go:130] >         errors
	I0603 05:23:19.563220    6132 command_runner.go:130] >         health {
	I0603 05:23:19.563220    6132 command_runner.go:130] >            lameduck 5s
	I0603 05:23:19.563220    6132 command_runner.go:130] >         }
	I0603 05:23:19.563220    6132 command_runner.go:130] >         ready
	I0603 05:23:19.563284    6132 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0603 05:23:19.563284    6132 command_runner.go:130] >            pods insecure
	I0603 05:23:19.563335    6132 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0603 05:23:19.563335    6132 command_runner.go:130] >            ttl 30
	I0603 05:23:19.563335    6132 command_runner.go:130] >         }
	I0603 05:23:19.563335    6132 command_runner.go:130] >         prometheus :9153
	I0603 05:23:19.563390    6132 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0603 05:23:19.563390    6132 command_runner.go:130] >            max_concurrent 1000
	I0603 05:23:19.563390    6132 command_runner.go:130] >         }
	I0603 05:23:19.563390    6132 command_runner.go:130] >         cache 30
	I0603 05:23:19.563458    6132 command_runner.go:130] >         loop
	I0603 05:23:19.563458    6132 command_runner.go:130] >         reload
	I0603 05:23:19.563458    6132 command_runner.go:130] >         loadbalance
	I0603 05:23:19.563512    6132 command_runner.go:130] >     }
	I0603 05:23:19.563577    6132 command_runner.go:130] > kind: ConfigMap
	I0603 05:23:19.563577    6132 command_runner.go:130] > metadata:
	I0603 05:23:19.563645    6132 command_runner.go:130] >   creationTimestamp: "2024-06-03T12:23:04Z"
	I0603 05:23:19.563645    6132 command_runner.go:130] >   name: coredns
	I0603 05:23:19.563709    6132 command_runner.go:130] >   namespace: kube-system
	I0603 05:23:19.563755    6132 command_runner.go:130] >   resourceVersion: "226"
	I0603 05:23:19.563755    6132 command_runner.go:130] >   uid: a16295d1-17ca-4e60-8fd1-1a25044b972a
	I0603 05:23:19.564283    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 05:23:19.638539    6132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:23:20.104729    6132 command_runner.go:130] > configmap/coredns replaced
	I0603 05:23:20.104844    6132 start.go:946] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
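Note: the sed pipeline above injects two fragments into the Corefile before replacing the ConfigMap: a "log" directive ahead of "errors", and a "hosts" block ahead of "forward", which is what makes host.minikube.internal resolvable from inside the cluster. The injected block looks like:

        hosts {
           172.17.80.1 host.minikube.internal
           fallthrough
        }
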
	I0603 05:23:20.106661    6132 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:23:20.106661    6132 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:23:20.108143    6132 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.87.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:23:20.108103    6132 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.87.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:23:20.110212    6132 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 05:23:20.110298    6132 node_ready.go:35] waiting up to 6m0s for node "multinode-316400" to be "Ready" ...
	I0603 05:23:20.110928    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:20.111028    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:20.110928    6132 round_trippers.go:463] GET https://172.17.87.47:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 05:23:20.111028    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:20.111028    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:20.111028    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:20.111028    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:20.111028    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:20.133215    6132 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0603 05:23:20.133787    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:20.133787    6132 round_trippers.go:580]     Content-Length: 291
	I0603 05:23:20.133787    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:20 GMT
	I0603 05:23:20.133787    6132 round_trippers.go:580]     Audit-Id: faedf4f9-5ee0-414a-98d4-b9365b088434
	I0603 05:23:20.133787    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:20.133787    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:20.133906    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:20.133906    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:20.133906    6132 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0603 05:23:20.133973    6132 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ceff0b29-f826-4fa1-b094-52b37decec6e","resourceVersion":"359","creationTimestamp":"2024-06-03T12:23:04Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 05:23:20.134072    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:20.134235    6132 round_trippers.go:580]     Audit-Id: c3058c28-843f-4d23-9a5b-13b446eb7c37
	I0603 05:23:20.134235    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:20.134235    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:20.134235    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:20.134235    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:20.134235    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:20 GMT
	I0603 05:23:20.134465    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:20.135157    6132 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ceff0b29-f826-4fa1-b094-52b37decec6e","resourceVersion":"359","creationTimestamp":"2024-06-03T12:23:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 05:23:20.135252    6132 round_trippers.go:463] PUT https://172.17.87.47:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 05:23:20.135344    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:20.135344    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:20.135344    6132 round_trippers.go:473]     Content-Type: application/json
	I0603 05:23:20.135344    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:20.157547    6132 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0603 05:23:20.157547    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:20.157624    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:20 GMT
	I0603 05:23:20.157624    6132 round_trippers.go:580]     Audit-Id: 8dced6bd-50f7-4665-a6ae-493db581ceb2
	I0603 05:23:20.157624    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:20.157624    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:20.157624    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:20.157624    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:20.157624    6132 round_trippers.go:580]     Content-Length: 291
	I0603 05:23:20.157711    6132 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ceff0b29-f826-4fa1-b094-52b37decec6e","resourceVersion":"361","creationTimestamp":"2024-06-03T12:23:04Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 05:23:20.614620    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:20.614900    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:20.614900    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:20.615019    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:20.614900    6132 round_trippers.go:463] GET https://172.17.87.47:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 05:23:20.615115    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:20.615166    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:20.615166    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:20.632659    6132 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 05:23:20.632745    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:20.632745    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:20.632745    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:20.632745    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:20 GMT
	I0603 05:23:20.632745    6132 round_trippers.go:580]     Audit-Id: 698fad24-6d6a-4e82-8db8-3c92d2a3a7cc
	I0603 05:23:20.632745    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:20.632745    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:20.633052    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:20.635635    6132 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0603 05:23:20.635635    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:20.635635    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:20.635635    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:20.635635    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:20.635635    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:20.635635    6132 round_trippers.go:580]     Content-Length: 291
	I0603 05:23:20.635635    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:20 GMT
	I0603 05:23:20.635635    6132 round_trippers.go:580]     Audit-Id: d3180bdb-c99e-43be-9d89-7326e380af1b
	I0603 05:23:20.635635    6132 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ceff0b29-f826-4fa1-b094-52b37decec6e","resourceVersion":"373","creationTimestamp":"2024-06-03T12:23:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0603 05:23:20.636640    6132 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-316400" context rescaled to 1 replicas
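Note: the PUT against the Scale subresource above is equivalent to scaling the Deployment with kubectl; a sketch:

  # a single-node cluster only needs one CoreDNS replica
  kubectl -n kube-system scale deployment coredns --replicas=1
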
	I0603 05:23:21.122745    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:21.122745    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:21.122843    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:21.122843    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:21.131540    6132 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:23:21.132569    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:21.132569    6132 round_trippers.go:580]     Audit-Id: 509d949c-37f2-4d3c-ad9f-6bdc826982b8
	I0603 05:23:21.132569    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:21.132569    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:21.132569    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:21.132569    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:21.132569    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:21 GMT
	I0603 05:23:21.132569    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:21.477941    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:23:21.477941    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:21.479318    6132 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:23:21.479481    6132 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.87.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:23:21.480714    6132 addons.go:234] Setting addon default-storageclass=true in "multinode-316400"
	I0603 05:23:21.480795    6132 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:23:21.481770    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:23:21.492146    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:23:21.492146    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:21.495155    6132 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:23:21.497763    6132 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 05:23:21.497763    6132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 05:23:21.497763    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:23:21.618648    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:21.618766    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:21.618766    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:21.618766    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:21.623122    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:21.623122    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:21.623789    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:21 GMT
	I0603 05:23:21.623789    6132 round_trippers.go:580]     Audit-Id: bc39cb0d-fcc6-4e27-88ee-2d4cd11968b5
	I0603 05:23:21.623789    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:21.623789    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:21.623789    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:21.623789    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:21.624229    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:22.114759    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:22.114759    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:22.114759    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:22.114759    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:22.118361    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:22.118361    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:22.118361    6132 round_trippers.go:580]     Audit-Id: 7efbbce8-8471-4f4c-bd1b-48d4c6b61b51
	I0603 05:23:22.118361    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:22.118361    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:22.118361    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:22.118361    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:22.118361    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:22 GMT
	I0603 05:23:22.119354    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:22.119354    6132 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
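Note: the repeated GETs of /api/v1/nodes/multinode-316400 are minikube polling until the node's Ready condition flips to True. The same wait can be expressed with kubectl, e.g.:

  # matches the 6m0s budget set by start.go above
  kubectl wait --for=condition=Ready node/multinode-316400 --timeout=360s
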
	I0603 05:23:22.623160    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:22.623251    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:22.623251    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:22.623340    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:22.628578    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:23:22.628578    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:22.628578    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:22.628578    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:22.628800    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:22.628800    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:22 GMT
	I0603 05:23:22.628800    6132 round_trippers.go:580]     Audit-Id: 1cadf699-7b22-48f0-b3e4-6d576eee4e33
	I0603 05:23:22.628800    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:22.629138    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:23.118188    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:23.118292    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:23.118363    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:23.118363    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:23.121942    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:23.121942    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:23.121942    6132 round_trippers.go:580]     Audit-Id: a10c6c8f-330f-4f41-be70-3750180b440b
	I0603 05:23:23.121942    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:23.121942    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:23.121942    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:23.121942    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:23.121942    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:23 GMT
	I0603 05:23:23.121942    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:23.621848    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:23.621848    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:23.621848    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:23.621848    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:23.625846    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:23.625846    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:23.625846    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:23 GMT
	I0603 05:23:23.625846    6132 round_trippers.go:580]     Audit-Id: 49dd00b5-392d-4680-a587-cc3e2a36e32e
	I0603 05:23:23.625846    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:23.625846    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:23.625846    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:23.625846    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:23.626859    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:23.798402    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:23:23.798520    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:23.798520    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:23:23.895597    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:23:23.896602    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:23.896704    6132 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 05:23:23.896790    6132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
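Note: once storageclass.yaml lands in /etc/kubernetes/addons it will be applied like the other addon manifests; whether it became the default class can be checked afterwards, e.g.:

  # the default class is marked with "(default)" in the output
  kubectl get storageclass
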
	I0603 05:23:23.896832    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:23:24.125215    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:24.125313    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:24.125313    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:24.125313    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:24.129742    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:24.130760    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:24.130760    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:24.130798    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:24.130798    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:24.130798    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:24.130798    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:24 GMT
	I0603 05:23:24.130798    6132 round_trippers.go:580]     Audit-Id: b2612791-08c9-4edb-a57b-94c32a7d84c7
	I0603 05:23:24.131111    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:24.131750    6132 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:23:24.615666    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:24.615866    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:24.615866    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:24.615866    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:24.619381    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:24.619782    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:24.619782    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:24.619782    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:24.619782    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:24 GMT
	I0603 05:23:24.619782    6132 round_trippers.go:580]     Audit-Id: d0289dc5-9764-45e0-bfa0-54b88c8662ea
	I0603 05:23:24.619782    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:24.619782    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:24.621864    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:25.123555    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:25.123555    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:25.123555    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:25.123555    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:25.127636    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:25.128226    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:25.128397    6132 round_trippers.go:580]     Audit-Id: 8a797edb-15b7-4c18-81ab-8208dcb579aa
	I0603 05:23:25.128397    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:25.128397    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:25.128397    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:25.128397    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:25.128491    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:25 GMT
	I0603 05:23:25.129058    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:25.619367    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:25.619367    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:25.619367    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:25.619367    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:25.623259    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:25.623259    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:25.623259    6132 round_trippers.go:580]     Audit-Id: 5e49983e-70cf-4eb0-a2e0-a41370864020
	I0603 05:23:25.623259    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:25.623259    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:25.623259    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:25.623537    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:25.623537    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:25 GMT
	I0603 05:23:25.623715    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:26.125451    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:26.125717    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:26.125717    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:26.125717    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:26.189802    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:23:26.189802    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:26.190037    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:23:26.274977    6132 round_trippers.go:574] Response Status: 200 OK in 149 milliseconds
	I0603 05:23:26.459694    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:26.459694    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:26.459694    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:26.459694    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:26.459694    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:26 GMT
	I0603 05:23:26.459694    6132 round_trippers.go:580]     Audit-Id: 749d832e-3575-48b0-8b44-91843714a2b7
	I0603 05:23:26.459694    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:26.459694    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:26.460977    6132 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:23:26.522038    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:23:26.522038    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:26.523126    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
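
[editor's note] The libmachine lines above show how the Hyper-V driver resolves the VM's IP before opening an SSH session: it shells out to PowerShell and reads the first address of the first network adapter. A minimal sketch of that step, not minikube's actual implementation; the VM name and PowerShell expression are taken verbatim from the log, the function name is made up, and it assumes a Windows host with the Hyper-V module:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hypervVMIP asks PowerShell for the VM's first reported IP address,
    // mirroring the "[executing ==>]" command logged above.
    func hypervVMIP(vmName string) (string, error) {
    	expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	ip, err := hypervVMIP("multinode-316400")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // e.g. 172.17.87.47, matching the [stdout =====>] line above
    }
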
	I0603 05:23:26.613018    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:26.613308    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:26.613308    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:26.613308    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:26.618410    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:23:26.618410    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:26.618410    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:26.618410    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:26.618410    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:26.618410    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:26.618410    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:26 GMT
	I0603 05:23:26.618410    6132 round_trippers.go:580]     Audit-Id: 49bbeaa2-ab6c-4f6d-a439-3bd9a868d8f7
	I0603 05:23:26.620806    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:26.674637    6132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 05:23:27.120308    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:27.120308    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:27.120308    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:27.120308    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:27.130705    6132 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 05:23:27.130705    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:27.131244    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:27.131244    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:27.131244    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:27.131244    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:27.131244    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:27 GMT
	I0603 05:23:27.131244    6132 round_trippers.go:580]     Audit-Id: 2b4c065a-3e9c-4eec-bbae-8f939e286f2d
	I0603 05:23:27.131744    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:27.286584    6132 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0603 05:23:27.286673    6132 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0603 05:23:27.286673    6132 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0603 05:23:27.286673    6132 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0603 05:23:27.286673    6132 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0603 05:23:27.286817    6132 command_runner.go:130] > pod/storage-provisioner created
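
[editor's note] The ssh_runner line above runs kubectl inside the VM over the SSH connection just established (IP, key path, username all from the sshutil log line). A rough sketch, under the assumption that golang.org/x/crypto/ssh is an acceptable stand-in for minikube's internal runner; function name and error handling are illustrative:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH executes one command on the minikube VM, roughly what the
    // ssh_runner lines in this log are doing.
    func runOverSSH(ip, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", ip+":22", cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("172.17.87.47",
    		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa`,
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
    	fmt.Println(out, err)
    }

The "created" lines above are simply that command's stdout streamed back through command_runner.
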
	I0603 05:23:27.623770    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:27.623877    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:27.623877    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:27.623877    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:27.627201    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:27.627201    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:27.627201    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:27.627201    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:27.627201    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:27.627201    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:27.627201    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:27 GMT
	I0603 05:23:27.627201    6132 round_trippers.go:580]     Audit-Id: 8ea54e66-8136-4a8d-90e4-8bbe873ba9ac
	I0603 05:23:27.628193    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:28.116814    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:28.117173    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:28.117173    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:28.117173    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:28.121929    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:28.122554    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:28.122554    6132 round_trippers.go:580]     Audit-Id: 40e41d7d-bb77-4a45-a48a-29bb5bdfe85d
	I0603 05:23:28.122554    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:28.122554    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:28.122554    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:28.122554    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:28.122554    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:28 GMT
	I0603 05:23:28.123111    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:28.619847    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:28.620147    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:28.620147    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:28.620147    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:28.623488    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:28.623488    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:28.623488    6132 round_trippers.go:580]     Audit-Id: 16bd1f79-5dfa-4f1f-995a-6467ed423924
	I0603 05:23:28.623488    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:28.623488    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:28.624477    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:28.624477    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:28.624477    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:28 GMT
	I0603 05:23:28.624859    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:28.625322    6132 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
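
[editor's note] The repeating GET /api/v1/nodes/multinode-316400 cycles in this log are node_ready.go polling roughly every 500ms until the node's Ready condition flips to True. A minimal client-go sketch of that loop, assuming a kubeconfig reachable from the host; the helper name, timeout, and kubeconfig path are illustrative, not minikube's code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-fetches the Node object until status.conditions reports
    // Ready=True, the same check node_ready.go logs as "Ready":"False" above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q never became Ready", name)
    }

    func main() {
    	// Assumed kubeconfig location; the test harness uses its own profile dir.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "multinode-316400", 6*time.Minute))
    }
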
	I0603 05:23:28.853461    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:23:28.853461    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:28.853957    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:23:28.988627    6132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 05:23:29.124924    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:29.124924    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:29.124924    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:29.124924    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:29.128933    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:29.128933    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:29.129648    6132 round_trippers.go:580]     Audit-Id: b051a56e-e3f2-45e2-a835-a350dbd61e05
	I0603 05:23:29.129648    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:29.129648    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:29.129648    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:29.129648    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:29.129648    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:29 GMT
	I0603 05:23:29.129856    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:29.134004    6132 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0603 05:23:29.134281    6132 round_trippers.go:463] GET https://172.17.87.47:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 05:23:29.134281    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:29.134281    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:29.134367    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:29.145146    6132 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 05:23:29.145796    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:29.145796    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:29.145796    6132 round_trippers.go:580]     Content-Length: 1273
	I0603 05:23:29.145796    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:29 GMT
	I0603 05:23:29.145796    6132 round_trippers.go:580]     Audit-Id: cc4af9eb-4d26-4cab-a401-e42d8885fca4
	I0603 05:23:29.145796    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:29.145796    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:29.145881    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:29.146049    6132 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"standard","uid":"6bcc8bff-b0ed-4e22-838d-9c90dd17b0fc","resourceVersion":"399","creationTimestamp":"2024-06-03T12:23:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:23:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0603 05:23:29.146571    6132 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6bcc8bff-b0ed-4e22-838d-9c90dd17b0fc","resourceVersion":"399","creationTimestamp":"2024-06-03T12:23:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:23:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 05:23:29.146653    6132 round_trippers.go:463] PUT https://172.17.87.47:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 05:23:29.146653    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:29.146653    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:29.146653    6132 round_trippers.go:473]     Content-Type: application/json
	I0603 05:23:29.146653    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:29.149886    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:29.149886    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:29.149886    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:29 GMT
	I0603 05:23:29.150529    6132 round_trippers.go:580]     Audit-Id: 45491702-6c18-474b-879d-29901159628d
	I0603 05:23:29.150529    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:29.150529    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:29.150529    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:29.150529    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:29.150592    6132 round_trippers.go:580]     Content-Length: 1220
	I0603 05:23:29.150592    6132 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6bcc8bff-b0ed-4e22-838d-9c90dd17b0fc","resourceVersion":"399","creationTimestamp":"2024-06-03T12:23:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:23:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 05:23:29.155828    6132 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 05:23:29.157728    6132 addons.go:510] duration metric: took 10.008987s for enable addons: enabled=[storage-provisioner default-storageclass]
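
[editor's note] The GET + PUT pair against /apis/storage.k8s.io/v1/storageclasses just above is the default-storageclass addon stamping the storageclass.kubernetes.io/is-default-class annotation onto "standard". A client-go sketch of the equivalent read-modify-write; the package and function names are illustrative:

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // markDefault annotates a StorageClass as the cluster default, which is
    // what the PUT request body logged above carries.
    func markDefault(cs *kubernetes.Clientset, name string) error {
    	ctx := context.TODO()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// The annotation visible in the logged request/response bodies:
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }
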
	I0603 05:23:29.623283    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:29.623366    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:29.623366    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:29.623366    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:29.629831    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:23:29.629831    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:29.629831    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:29.629831    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:29.629831    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:29.629831    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:29 GMT
	I0603 05:23:29.629831    6132 round_trippers.go:580]     Audit-Id: c3b43287-c5b9-4972-82a0-ee8b83e1b458
	I0603 05:23:29.629831    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:29.630039    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:30.122953    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:30.123094    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:30.123094    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:30.123094    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:30.127410    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:30.127410    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:30.127410    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:30.127410    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:30.128048    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:30.128048    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:30.128048    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:30 GMT
	I0603 05:23:30.128048    6132 round_trippers.go:580]     Audit-Id: 966291ee-d224-4a05-abd8-a60a0876703d
	I0603 05:23:30.129369    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:30.618812    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:30.619006    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:30.619006    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:30.619113    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:30.622387    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:30.622387    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:30.622387    6132 round_trippers.go:580]     Audit-Id: 3b3b5546-3189-498c-aae5-9b17841c9688
	I0603 05:23:30.622387    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:30.622387    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:30.622387    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:30.622387    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:30.622387    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:30 GMT
	I0603 05:23:30.623516    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"317","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0603 05:23:31.119262    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:31.119262    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.119262    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.119262    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.123263    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:31.123263    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.123995    6132 round_trippers.go:580]     Audit-Id: 14f11552-aefc-45e7-a746-548142949513
	I0603 05:23:31.123995    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.123995    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.123995    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.123995    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.123995    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.124151    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:31.124756    6132 node_ready.go:49] node "multinode-316400" has status "Ready":"True"
	I0603 05:23:31.124756    6132 node_ready.go:38] duration metric: took 11.0144202s for node "multinode-316400" to be "Ready" ...
	I0603 05:23:31.124756    6132 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
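
[editor's note] From here the log switches to pod_ready.go: list the kube-system pods once, then poll each system-critical pod (coredns first) until its Ready condition is True. A compact sketch of the per-pod wait, sharing the client-go setup from the node-readiness sketch earlier; names and the poll interval are illustrative:

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady mirrors the pod_ready.go loop below: fetch the pod, inspect
    // its Ready condition, retry until the deadline expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }
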
	I0603 05:23:31.124756    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:23:31.124756    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.124756    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.124756    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.128339    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:31.128339    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.128339    6132 round_trippers.go:580]     Audit-Id: f3f4aaea-3bb8-46fe-bc72-a04b55d71a38
	I0603 05:23:31.128339    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.128339    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.128339    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.128339    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.128339    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.133327    6132 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"407","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0603 05:23:31.138132    6132 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:31.138295    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:23:31.138295    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.138295    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.138295    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.141761    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:31.141761    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.141830    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.141830    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.141830    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.141830    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.141872    6132 round_trippers.go:580]     Audit-Id: 27769bd5-207a-4ff5-8b7a-e5488526f047
	I0603 05:23:31.141910    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.141974    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"407","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0603 05:23:31.141974    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:31.141974    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.141974    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.141974    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.144314    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:23:31.145327    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.145327    6132 round_trippers.go:580]     Audit-Id: 1163071b-9f69-47ae-aa12-fa93593e3d93
	I0603 05:23:31.145379    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.145379    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.145379    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.145379    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.145379    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.145787    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:31.643245    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:23:31.643335    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.643335    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.643335    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.647702    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:31.647889    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.647889    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.647889    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.647889    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.647889    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.647889    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.647972    6132 round_trippers.go:580]     Audit-Id: 2f4be234-f756-41bc-9e94-9db1a573ae9a
	I0603 05:23:31.648208    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"407","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0603 05:23:31.648764    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:31.648764    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:31.648764    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:31.648764    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:31.656361    6132 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:23:31.656361    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:31.656361    6132 round_trippers.go:580]     Audit-Id: 7d094458-81ee-42af-a4ce-193166bd166f
	I0603 05:23:31.656757    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:31.656757    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:31.656757    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:31.656757    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:31.656757    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:31 GMT
	I0603 05:23:31.657084    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:32.149751    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:23:32.149812    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:32.149812    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:32.149812    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:32.153196    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:32.153196    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:32.153196    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:32.154026    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:32.154026    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:32.154026    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:32.154026    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:32 GMT
	I0603 05:23:32.154026    6132 round_trippers.go:580]     Audit-Id: 82fc1802-1045-4006-b364-f615e4614c05
	I0603 05:23:32.155142    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"407","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0603 05:23:32.155868    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:32.155868    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:32.155941    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:32.155941    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:32.158551    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:23:32.159097    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:32.159097    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:32 GMT
	I0603 05:23:32.159097    6132 round_trippers.go:580]     Audit-Id: 5f3871fe-cddc-4a60-a04e-4f4bed91516e
	I0603 05:23:32.159097    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:32.159097    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:32.159156    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:32.159156    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:32.159156    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:32.639479    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:23:32.639537    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:32.639632    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:32.639632    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:32.645378    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:23:32.645378    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:32.645378    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:32.645378    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:32.645444    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:32 GMT
	I0603 05:23:32.645444    6132 round_trippers.go:580]     Audit-Id: 149b6952-94ce-4a0c-82bf-fe131bc1cadf
	I0603 05:23:32.645444    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:32.645444    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:32.645630    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"407","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0603 05:23:32.646356    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:32.646356    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:32.646554    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:32.646554    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:32.650300    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:32.650300    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:32.650300    6132 round_trippers.go:580]     Audit-Id: 35f410a9-9609-4f5c-b2b2-363ce45fb7bd
	I0603 05:23:32.650300    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:32.650300    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:32.650300    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:32.650300    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:32.650300    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:32 GMT
	I0603 05:23:32.650838    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.140320    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:23:33.140609    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.140609    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.140609    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.144080    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:33.144942    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.144942    6132 round_trippers.go:580]     Audit-Id: 436d3278-a671-4879-aee6-1226df588194
	I0603 05:23:33.144942    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.144942    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.144942    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.144942    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.144942    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.145141    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"422","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0603 05:23:33.145699    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.145699    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.145699    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.145699    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.151157    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:23:33.151205    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.151243    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.151243    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.151243    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.151277    6132 round_trippers.go:580]     Audit-Id: f9609b57-34c1-4f86-b604-4b8146be87f3
	I0603 05:23:33.151277    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.151277    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.151299    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.152078    6132 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.152078    6132 pod_ready.go:81] duration metric: took 2.0139173s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.152078    6132 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.152078    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:23:33.152078    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.152078    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.152078    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.155689    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:33.155689    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.155689    6132 round_trippers.go:580]     Audit-Id: a4b3bdfc-0c0f-473e-b1d6-9e9565f86cc2
	I0603 05:23:33.155689    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.155689    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.155689    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.155689    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.155689    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.155689    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"5a3b396d-1240-4c67-b2f5-e5664e068bfe","resourceVersion":"383","creationTimestamp":"2024-06-03T12:23:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.87.47:2379","kubernetes.io/config.hash":"b79ce6c8ebbce53597babbe73b1962c9","kubernetes.io/config.mirror":"b79ce6c8ebbce53597babbe73b1962c9","kubernetes.io/config.seen":"2024-06-03T12:22:56.267029490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0603 05:23:33.155689    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.155689    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.155689    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.155689    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.162203    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:23:33.162203    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.162203    6132 round_trippers.go:580]     Audit-Id: 631ad297-86d7-4d72-943c-24a86cbef711
	I0603 05:23:33.162203    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.162203    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.162203    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.162203    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.162351    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.162670    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.163116    6132 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.163116    6132 pod_ready.go:81] duration metric: took 11.0374ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.163116    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.163255    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:23:33.163255    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.163255    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.163309    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.165184    6132 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:23:33.165184    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.165184    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.165184    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.165184    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.165184    6132 round_trippers.go:580]     Audit-Id: 8567e498-c04e-448d-940e-5e715de312df
	I0603 05:23:33.165184    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.165184    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.166191    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"0cdcee20-9dca-4eca-b92f-a7214368dd5e","resourceVersion":"381","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.87.47:8443","kubernetes.io/config.hash":"171c5f025e4267e9949ddac2f1863980","kubernetes.io/config.mirror":"171c5f025e4267e9949ddac2f1863980","kubernetes.io/config.seen":"2024-06-03T12:22:56.267035289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0603 05:23:33.166191    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.166191    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.166191    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.166191    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.169683    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:33.170407    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.170407    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.170407    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.170407    6132 round_trippers.go:580]     Audit-Id: 832a0a67-83f3-4c57-8fd1-084fae9dcf54
	I0603 05:23:33.170473    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.170473    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.170473    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.170832    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.171261    6132 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.171261    6132 pod_ready.go:81] duration metric: took 8.1453ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.171261    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.171368    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:23:33.171368    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.171368    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.171368    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.176120    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:33.176120    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.176208    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.176208    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.176208    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.176208    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.176208    6132 round_trippers.go:580]     Audit-Id: ec424b5d-f498-490b-8abc-7802007e0c15
	I0603 05:23:33.176208    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.176513    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"384","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0603 05:23:33.176513    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.177047    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.177047    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.177047    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.185538    6132 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:23:33.185538    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.185538    6132 round_trippers.go:580]     Audit-Id: 8f44d987-132d-4060-8aa4-e8f26ca79f3d
	I0603 05:23:33.185538    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.185538    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.185538    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.185538    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.185538    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.186294    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.186885    6132 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.186950    6132 pod_ready.go:81] duration metric: took 15.6154ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.186950    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.187026    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:23:33.187096    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.187096    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.187096    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.189366    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:23:33.189366    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.189366    6132 round_trippers.go:580]     Audit-Id: f223f494-d320-4dd4-be96-85292419f03d
	I0603 05:23:33.189366    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.189366    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.190182    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.190182    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.190182    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.190362    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"376","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0603 05:23:33.190802    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.190802    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.190802    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.190802    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.193416    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:23:33.193416    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.193416    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.193416    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.193416    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.193416    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.193416    6132 round_trippers.go:580]     Audit-Id: dbd1a27f-f4dc-4efa-b7fc-5cebf3a5a89b
	I0603 05:23:33.193416    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.194175    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.195035    6132 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.195105    6132 pod_ready.go:81] duration metric: took 8.0845ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.195105    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:23:33.342170    6132 request.go:629] Waited for 146.8392ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:23:33.342345    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:23:33.342345    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.342345    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.342345    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.347216    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:33.347216    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.347216    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.347216    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.347562    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.347562    6132 round_trippers.go:580]     Audit-Id: 8804ff0c-3d08-4c4a-bd7f-d27dc01b127e
	I0603 05:23:33.347562    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.347562    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.347767    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"382","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0603 05:23:33.546525    6132 request.go:629] Waited for 198.4293ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.546853    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:23:33.546853    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.546853    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.546853    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.551448    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:33.551448    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.551928    6132 round_trippers.go:580]     Audit-Id: a3f7fda9-1b11-4b99-a492-7c1e63898cba
	I0603 05:23:33.552131    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.552131    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.552131    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.552131    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.552131    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.552921    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0603 05:23:33.553151    6132 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:23:33.553151    6132 pod_ready.go:81] duration metric: took 358.0456ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
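
The "Waited for … due to client-side throttling" lines above come from client-go's local token-bucket rate limiter (default QPS=5, Burst=10), not from server-side API Priority and Fairness; the limiter spaces out the burst of GETs issued by the readiness loop. A minimal sketch of where those limits live when building a client, assuming standard client-go; the raised values are illustrative, not minikube's:

    package example

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // client-go defaults to QPS=5, Burst=10; requests beyond the burst
        // are delayed locally, which produces the "client-side throttling"
        // wait messages seen in this log.
        cfg.QPS = 50    // illustrative value
        cfg.Burst = 100 // illustrative value
        return kubernetes.NewForConfig(cfg)
    }
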
	I0603 05:23:33.553151    6132 pod_ready.go:38] duration metric: took 2.428387s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
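
The pod_ready.go loop above amounts to polling each control-plane pod's PodReady condition until it reports True (each iteration is one of the GET pod / GET node pairs traced above). A rough client-go equivalent; waitPodReady and the 500ms interval are illustrative, not minikube's actual code:

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True, mirroring
    // the pod_ready.go checks in the log (GET pod, inspect status conditions).
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
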
	I0603 05:23:33.553151    6132 api_server.go:52] waiting for apiserver process to appear ...
	I0603 05:23:33.565586    6132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:23:33.592589    6132 command_runner.go:130] > 2014
	I0603 05:23:33.592643    6132 api_server.go:72] duration metric: took 14.4437463s to wait for apiserver process to appear ...
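
The process check above boils down to one pgrep invocation inside the guest (it returned PID 2014). minikube runs it through its SSH runner; plain exec.Command stands in for that here, and apiserverPID is a hypothetical name:

    package example

    import (
        "os/exec"
        "strconv"
        "strings"
    )

    // apiserverPID runs the same pgrep as the log and returns the newest
    // matching PID; pgrep exits non-zero when no process matches.
    func apiserverPID() (int, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(out)))
    }
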
	I0603 05:23:33.592750    6132 api_server.go:88] waiting for apiserver healthz status ...
	I0603 05:23:33.592750    6132 api_server.go:253] Checking apiserver healthz at https://172.17.87.47:8443/healthz ...
	I0603 05:23:33.602296    6132 api_server.go:279] https://172.17.87.47:8443/healthz returned 200:
	ok
	I0603 05:23:33.602498    6132 round_trippers.go:463] GET https://172.17.87.47:8443/version
	I0603 05:23:33.602603    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.602603    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.602603    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.603657    6132 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:23:33.604503    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.604503    6132 round_trippers.go:580]     Content-Length: 263
	I0603 05:23:33.604503    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.604503    6132 round_trippers.go:580]     Audit-Id: 862ff48f-5c10-4d06-bcea-50275b706755
	I0603 05:23:33.604503    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.604503    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.604503    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.604503    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.604503    6132 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 05:23:33.604638    6132 api_server.go:141] control plane version: v1.30.1
	I0603 05:23:33.604747    6132 api_server.go:131] duration metric: took 11.9968ms to wait for apiserver health ...
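
The healthz and /version probes above are plain HTTPS GETs against the apiserver. A reduced net/http sketch; real minikube clients authenticate with the cluster's certificates, so the InsecureSkipVerify below is illustrative only, and checkAPIServer is a hypothetical name:

    package example

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // checkAPIServer mirrors the healthz-then-version sequence in the log.
    func checkAPIServer(base string) (string, error) {
        // Illustrative only: real clients present client certificates
        // instead of skipping verification.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return "", err
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }

        resp, err = client.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil // e.g. "v1.30.1", as in the response above
    }
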
	I0603 05:23:33.604747    6132 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 05:23:33.748367    6132 request.go:629] Waited for 143.2129ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:23:33.748647    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:23:33.748647    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.748745    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.748745    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.755254    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:23:33.755254    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.755254    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.755254    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.755254    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.755254    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.755536    6132 round_trippers.go:580]     Audit-Id: fd0fc851-e71e-4cbc-82b1-62ed0cd72b51
	I0603 05:23:33.755536    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.758668    6132 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"422","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0603 05:23:33.761909    6132 system_pods.go:59] 8 kube-system pods found
	I0603 05:23:33.761909    6132 system_pods.go:61] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "etcd-multinode-316400" [5a3b396d-1240-4c67-b2f5-e5664e068bfe] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "kube-apiserver-multinode-316400" [0cdcee20-9dca-4eca-b92f-a7214368dd5e] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:23:33.761909    6132 system_pods.go:61] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:23:33.761909    6132 system_pods.go:74] duration metric: took 157.1617ms to wait for pod list to return data ...
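
The system_pods.go check lists the kube-system namespace once and inspects each pod's phase, which is what produces the "8 kube-system pods found … Running" block above. A client-go sketch; allSystemPodsRunning is a hypothetical name:

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning lists kube-system once and confirms every pod
    // reports phase Running, as the log does for its eight pods.
    func allSystemPodsRunning(ctx context.Context, c kubernetes.Interface) (bool, error) {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }
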
	I0603 05:23:33.761909    6132 default_sa.go:34] waiting for default service account to be created ...
	I0603 05:23:33.949849    6132 request.go:629] Waited for 187.7807ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/default/serviceaccounts
	I0603 05:23:33.950153    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/default/serviceaccounts
	I0603 05:23:33.950219    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:33.950219    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:33.950219    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:33.954897    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:23:33.954946    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:33.954946    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:33.954946    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:33.954946    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:33.954946    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:33.954946    6132 round_trippers.go:580]     Content-Length: 261
	I0603 05:23:33.954946    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:33 GMT
	I0603 05:23:33.954946    6132 round_trippers.go:580]     Audit-Id: b58ac72f-70c9-4eb2-99d6-67f8f24ac2bd
	I0603 05:23:33.955029    6132 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"995f775d-e30c-4872-957a-b91ade4bf666","resourceVersion":"318","creationTimestamp":"2024-06-03T12:23:18Z"}}]}
	I0603 05:23:33.955135    6132 default_sa.go:45] found service account: "default"
	I0603 05:23:33.955135    6132 default_sa.go:55] duration metric: took 193.2254ms for default service account to be created ...
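
The default_sa.go wait above just watches for the "default" ServiceAccount, which kube-controller-manager creates once the namespace is initialized; the log uses a List, but a Get against the known name answers the same question. Sketch (defaultSAExists is hypothetical):

    package example

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists reports whether the "default" ServiceAccount has been
    // created in the default namespace yet.
    func defaultSAExists(ctx context.Context, c kubernetes.Interface) (bool, error) {
        _, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        return err == nil, err
    }
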
	I0603 05:23:33.955135    6132 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 05:23:34.153288    6132 request.go:629] Waited for 197.9641ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:23:34.153544    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:23:34.153544    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:34.153544    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:34.153544    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:34.161081    6132 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:23:34.161081    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:34.161081    6132 round_trippers.go:580]     Audit-Id: 45c15e5d-6b93-4a99-81f4-54f833e72d0f
	I0603 05:23:34.161081    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:34.161081    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:34.161228    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:34.161228    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:34.161228    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:34 GMT
	I0603 05:23:34.161298    6132 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"422","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0603 05:23:34.165352    6132 system_pods.go:86] 8 kube-system pods found
	I0603 05:23:34.165466    6132 system_pods.go:89] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:23:34.165466    6132 system_pods.go:89] "etcd-multinode-316400" [5a3b396d-1240-4c67-b2f5-e5664e068bfe] Running
	I0603 05:23:34.165466    6132 system_pods.go:89] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:23:34.165466    6132 system_pods.go:89] "kube-apiserver-multinode-316400" [0cdcee20-9dca-4eca-b92f-a7214368dd5e] Running
	I0603 05:23:34.165560    6132 system_pods.go:89] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:23:34.165560    6132 system_pods.go:89] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:23:34.165560    6132 system_pods.go:89] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:23:34.165560    6132 system_pods.go:89] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:23:34.165560    6132 system_pods.go:126] duration metric: took 210.4241ms to wait for k8s-apps to be running ...
	I0603 05:23:34.165634    6132 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:23:34.178405    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:23:34.202997    6132 system_svc.go:56] duration metric: took 37.2403ms WaitForService to wait for kubelet
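
The kubelet check above is a single systemd query over SSH: systemctl is-active exits 0 only when the unit is active, so the exit code alone answers the question. Sketch, again with exec.Command standing in for the SSH runner and kubeletRunning a hypothetical name:

    package example

    import "os/exec"

    // kubeletRunning mirrors `sudo systemctl is-active --quiet service kubelet`
    // from the log: with --quiet there is no output, and a nil error means
    // the unit is active.
    func kubeletRunning() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }
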
	I0603 05:23:34.203029    6132 kubeadm.go:576] duration metric: took 15.0541304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:23:34.203121    6132 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:23:34.355504    6132 request.go:629] Waited for 152.0303ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes
	I0603 05:23:34.355674    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes
	I0603 05:23:34.355674    6132 round_trippers.go:469] Request Headers:
	I0603 05:23:34.355763    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:23:34.355763    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:23:34.359239    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:23:34.359239    6132 round_trippers.go:577] Response Headers:
	I0603 05:23:34.359239    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:23:34.359856    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:23:34.359856    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:23:34.359856    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:23:34.359856    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:23:34 GMT
	I0603 05:23:34.359856    6132 round_trippers.go:580]     Audit-Id: 909d8b28-6763-418b-b474-c360af754f58
	I0603 05:23:34.360168    6132 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"402","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0603 05:23:34.360678    6132 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:23:34.360678    6132 node_conditions.go:123] node cpu capacity is 2
	I0603 05:23:34.360678    6132 node_conditions.go:105] duration metric: took 157.5566ms to run NodePressure ...
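
The NodePressure step lists the nodes and reads the capacity figures printed above (ephemeral storage 17734596Ki, 2 CPUs); a fuller check would also confirm no pressure condition is True. A client-go sketch (printNodeCapacity is a hypothetical name):

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports the two capacity figures the log prints,
    // plus any non-Ready condition (MemoryPressure, DiskPressure,
    // PIDPressure) that is unexpectedly True.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, cond := range n.Status.Conditions {
                if cond.Type != corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition %s is True\n", cond.Type)
                }
            }
        }
        return nil
    }
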
	I0603 05:23:34.360678    6132 start.go:240] waiting for startup goroutines ...
	I0603 05:23:34.360678    6132 start.go:245] waiting for cluster config update ...
	I0603 05:23:34.360678    6132 start.go:254] writing updated cluster config ...
	I0603 05:23:34.364784    6132 out.go:177] 
	I0603 05:23:34.368973    6132 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:23:34.376159    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:23:34.376159    6132 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:23:34.381896    6132 out.go:177] * Starting "multinode-316400-m02" worker node in "multinode-316400" cluster
	I0603 05:23:34.385461    6132 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:23:34.385461    6132 cache.go:56] Caching tarball of preloaded images
	I0603 05:23:34.386637    6132 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:23:34.386855    6132 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
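
The preload step above is a cache hit: the check reduces to "does the preloaded image tarball already exist on disk", and the download is skipped when it does. A trivial sketch (preloadExists is hypothetical):

    package example

    import "os"

    // preloadExists mirrors the cache check in the log: a non-empty tarball
    // at the expected path means no download is needed.
    func preloadExists(path string) bool {
        info, err := os.Stat(path)
        return err == nil && info.Size() > 0
    }
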
	I0603 05:23:34.387025    6132 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:23:34.391934    6132 start.go:360] acquireMachinesLock for multinode-316400-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:23:34.391934    6132 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400-m02"
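
acquireMachinesLock serializes concurrent machine creation behind a named lock; the spec in the log (Delay:500ms, Timeout:13m0s) suggests a retry-until-timeout acquire. The sketch below illustrates that pattern with O_EXCL file creation; it is not minikube's actual locking mechanism:

    package example

    import (
        "errors"
        "os"
        "path/filepath"
        "time"
    )

    // acquireLock emulates the acquireMachinesLock pattern from the log:
    // retry every delay until timeout, using exclusive file creation as the
    // mutual-exclusion primitive. Purely illustrative.
    func acquireLock(name string, delay, timeout time.Duration) (release func(), err error) {
        path := filepath.Join(os.TempDir(), name+".lock")
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + name)
            }
            time.Sleep(delay) // Delay:500ms in the logged spec
        }
    }
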
	I0603 05:23:34.391934    6132 start.go:93] Provisioning new machine with config: &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:23:34.391934    6132 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 05:23:34.395015    6132 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 05:23:34.395015    6132 start.go:159] libmachine.API.Create for "multinode-316400" (driver="hyperv")
	I0603 05:23:34.395015    6132 client.go:168] LocalClient.Create starting
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Decoding PEM data...
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Parsing certificate...
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Decoding PEM data...
	I0603 05:23:34.395728    6132 main.go:141] libmachine: Parsing certificate...
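
The certificate steps above (read, PEM-decode, parse) for ca.pem and cert.pem map directly onto the Go standard library. Sketch (parseCert is a hypothetical name):

    package example

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
    )

    // parseCert mirrors the "Reading certificate data / Decoding PEM data /
    // Parsing certificate" sequence in the log for a single PEM file.
    func parseCert(path string) (*x509.Certificate, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            return nil, errors.New("no CERTIFICATE block in " + path)
        }
        return x509.ParseCertificate(block.Bytes)
    }
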
	I0603 05:23:34.395728    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 05:23:36.316185    6132 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 05:23:36.316246    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:36.316246    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 05:23:38.033342    6132 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 05:23:38.033636    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:38.033742    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 05:23:39.505276    6132 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 05:23:39.505276    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:39.505276    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 05:23:43.217686    6132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 05:23:43.217686    6132 main.go:141] libmachine: [stderr =====>] : 
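
Each [executing ==>] line above is a one-shot powershell.exe invocation whose stdout and stderr are captured separately. A minimal Go sketch of the same shell-out pattern (the script string mirrors the logged Get-VMSwitch query; this is an illustration, not minikube's actual helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same query as the logged [executing ==>] line: enumerate switches as JSON.
        script := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).CombinedOutput()
        if err != nil {
            fmt.Println("powershell failed:", err)
            return
        }
        fmt.Print(string(out)) // e.g. [{"Id": "...", "Name": "Default Switch", "SwitchType": 1}]
    }
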
	I0603 05:23:43.219770    6132 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 05:23:43.693323    6132 main.go:141] libmachine: Creating SSH key...
	I0603 05:23:44.053155    6132 main.go:141] libmachine: Creating VM...
	I0603 05:23:44.053155    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 05:23:47.062424    6132 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 05:23:47.063481    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:47.063606    6132 main.go:141] libmachine: Using switch "Default Switch"
	I0603 05:23:47.063706    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 05:23:48.938233    6132 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 05:23:48.938233    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:48.938233    6132 main.go:141] libmachine: Creating VHD
	I0603 05:23:48.938233    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 05:23:52.829614    6132 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4764A049-E2F0-4804-9A8B-E95BBF78AB90
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 05:23:52.829614    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:52.829614    6132 main.go:141] libmachine: Writing magic tar header
	I0603 05:23:52.829614    6132 main.go:141] libmachine: Writing SSH key tar header
	I0603 05:23:52.841471    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 05:23:56.053662    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:23:56.053662    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:23:56.053662    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\disk.vhd' -SizeBytes 20000MB
	I0603 05:23:58.626656    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:23:58.626656    6132 main.go:141] libmachine: [stderr =====>] : 
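
The New-VHD -Fixed / "magic tar header" / Convert-VHD / Resize-VHD sequence above is the usual key-injection trick for boot2docker-style guests: a fixed-format VHD is raw disk data followed by a 512-byte footer, so a tar archive written at offset 0 lands at the very start of the virtual disk, where the guest's init can detect it and install the SSH key on first boot; converting to dynamic and resizing afterwards preserves that data. A minimal sketch of the tar-writing step (file paths are illustrative, assuming a freshly created fixed VHD):

    package main

    import (
        "archive/tar"
        "fmt"
        "os"
    )

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // hypothetical public-key path
        if err != nil {
            panic(err)
        }
        // In a *fixed* VHD the data area starts at byte 0; the footer sits at the end.
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
        fmt.Println("SSH key tar written into VHD data area")
    }
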
	I0603 05:23:58.627742    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-316400-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 05:24:02.312857    6132 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-316400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 05:24:02.312972    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:02.312972    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-316400-m02 -DynamicMemoryEnabled $false
	I0603 05:24:04.583042    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:04.583042    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:04.583042    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-316400-m02 -Count 2
	I0603 05:24:06.788369    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:06.789184    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:06.789184    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-316400-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\boot2docker.iso'
	I0603 05:24:09.418939    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:09.418939    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:09.418939    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-316400-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\disk.vhd'
	I0603 05:24:12.141727    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:12.141727    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:12.142472    6132 main.go:141] libmachine: Starting VM...
	I0603 05:24:12.142472    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400-m02
	I0603 05:24:15.285141    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:15.285141    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:15.285141    6132 main.go:141] libmachine: Waiting for host to start...
	I0603 05:24:15.285776    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:17.643629    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:17.643629    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:17.643739    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:20.262857    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:20.262857    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:21.268452    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:23.541761    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:23.541761    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:23.542343    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:26.221668    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:26.221748    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:27.232434    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:29.561293    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:29.561293    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:29.562910    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:32.154782    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:32.154982    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:33.170240    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:35.397468    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:35.397468    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:35.397564    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:37.971710    6132 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:24:37.971710    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:38.987353    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:41.269987    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:41.271064    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:41.271064    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:43.828713    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:24:43.828713    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:43.828810    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:45.996670    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:45.996670    6132 main.go:141] libmachine: [stderr =====>] : 
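
"Waiting for host to start..." above is a poll: the VM state and the first NIC address are queried in a loop until an IPv4 address appears (an empty stdout, as in the first few iterations, means DHCP has not assigned one yet). A compact sketch of that loop (VM name hard-coded and sleep interval illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell script and returns its trimmed stdout.
    func ps(script string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const vm = "multinode-316400-m02"
        for {
            state, _ := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
            ip, _ := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" {
                fmt.Println("host up at", ip)
                return
            }
            time.Sleep(time.Second) // the log shows roughly one retry per second of wait
        }
    }
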
	I0603 05:24:45.996670    6132 machine.go:94] provisionDockerMachine start ...
	I0603 05:24:45.997321    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:48.215741    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:48.215879    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:48.215879    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:50.771731    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:24:50.771792    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:50.777474    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:24:50.787903    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:24:50.787903    6132 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:24:50.931569    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:24:50.931644    6132 buildroot.go:166] provisioning hostname "multinode-316400-m02"
	I0603 05:24:50.931822    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:53.082209    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:53.082209    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:53.082209    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:24:55.677144    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:24:55.677784    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:55.682892    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:24:55.683300    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:24:55.683300    6132 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400-m02 && echo "multinode-316400-m02" | sudo tee /etc/hostname
	I0603 05:24:55.846153    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400-m02
	
	I0603 05:24:55.846302    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:24:57.987688    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:24:57.987688    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:24:57.988373    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:00.549836    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:00.550063    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:00.555709    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:00.556463    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:00.556463    6132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:25:00.706623    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
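
The SSH snippet above makes the hostname change survive reboots: if /etc/hosts does not already mention the new name, the existing 127.0.1.1 line is rewritten to it, or a new line is appended when none exists. The same logic as a local Go sketch (regexes illustrative):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const hostname = "multinode-316400-m02"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        if regexp.MustCompile(`(?m)\s` + hostname + `$`).Match(data) {
            return // hostname already present; nothing to do
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.Match(data) {
            data = re.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        if err := os.WriteFile("/etc/hosts", data, 0644); err != nil {
            panic(err)
        }
        fmt.Println("hosts entry ensured for", hostname)
    }
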
	I0603 05:25:00.706676    6132 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:25:00.706790    6132 buildroot.go:174] setting up certificates
	I0603 05:25:00.706836    6132 provision.go:84] configureAuth start
	I0603 05:25:00.706934    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:02.919997    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:02.919997    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:02.920909    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:05.490184    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:05.490184    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:05.490538    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:07.647268    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:07.647268    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:07.647268    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:10.191928    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:10.191928    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:10.191928    6132 provision.go:143] copyHostCerts
	I0603 05:25:10.191928    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:25:10.191928    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:25:10.191928    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:25:10.191928    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:25:10.194325    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:25:10.194538    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:25:10.194657    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:25:10.194950    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:25:10.196221    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:25:10.196585    6132 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:25:10.196585    6132 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:25:10.197017    6132 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:25:10.198039    6132 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400-m02 san=[127.0.0.1 172.17.94.201 localhost minikube multinode-316400-m02]
	I0603 05:25:10.413784    6132 provision.go:177] copyRemoteCerts
	I0603 05:25:10.425779    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:25:10.425779    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:12.594758    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:12.594758    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:12.594758    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:15.147540    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:15.147540    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:15.148279    6132 sshutil.go:53] new ssh client: &{IP:172.17.94.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:25:15.250516    6132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8247209s)
	I0603 05:25:15.250654    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:25:15.251230    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:25:15.302945    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:25:15.303184    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 05:25:15.348502    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:25:15.348922    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 05:25:15.398763    6132 provision.go:87] duration metric: took 14.6918374s to configureAuth
	I0603 05:25:15.398852    6132 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:25:15.399316    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:25:15.399316    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:17.540037    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:17.540037    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:17.540118    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:20.081566    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:20.081931    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:20.086719    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:20.087467    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:20.087467    6132 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:25:20.235304    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:25:20.235364    6132 buildroot.go:70] root file system type: tmpfs
	I0603 05:25:20.235616    6132 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:25:20.235616    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:22.407504    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:22.408564    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:22.408606    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:25.008055    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:25.008055    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:25.013530    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:25.013754    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:25.013754    6132 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.87.47"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:25:25.188788    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.87.47
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:25:25.188788    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:27.388742    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:27.388742    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:27.389682    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:29.964564    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:29.964564    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:29.971451    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:29.972199    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:29.972199    6132 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:25:32.100818    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:25:32.100905    6132 machine.go:97] duration metric: took 46.1040783s to provisionDockerMachine
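
The `sudo diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner above is an idempotent unit install: the freshly rendered docker.service.new only replaces the live unit, and only triggers a daemon reload and restart, when its content differs; here diff failed because no unit existed yet, so the file was moved into place and docker enabled. A local sketch of the pattern (paths illustrative, would need root):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
        if err != nil {
            panic(err)
        }
        oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
        if bytes.Equal(oldUnit, newUnit) {
            fmt.Println("unit unchanged; nothing to do")
            return
        }
        if err := os.Rename("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("systemctl %v: %v\n%s", args, err, out))
            }
        }
    }
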
	I0603 05:25:32.100905    6132 client.go:171] duration metric: took 1m57.7054902s to LocalClient.Create
	I0603 05:25:32.100905    6132 start.go:167] duration metric: took 1m57.7054902s to libmachine.API.Create "multinode-316400"
	I0603 05:25:32.100905    6132 start.go:293] postStartSetup for "multinode-316400-m02" (driver="hyperv")
	I0603 05:25:32.100905    6132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:25:32.112712    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:25:32.112712    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:34.274026    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:34.274026    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:34.274026    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:36.827688    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:36.828411    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:36.828580    6132 sshutil.go:53] new ssh client: &{IP:172.17.94.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:25:36.930202    6132 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8174739s)
	I0603 05:25:36.944225    6132 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:25:36.950504    6132 command_runner.go:130] > NAME=Buildroot
	I0603 05:25:36.950504    6132 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:25:36.950504    6132 command_runner.go:130] > ID=buildroot
	I0603 05:25:36.950504    6132 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:25:36.950504    6132 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:25:36.950504    6132 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:25:36.950504    6132 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:25:36.950504    6132 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:25:36.952211    6132 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:25:36.952321    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:25:36.964607    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:25:36.986957    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:25:37.033259    6132 start.go:296] duration metric: took 4.9323372s for postStartSetup
	I0603 05:25:37.036571    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:39.199153    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:39.199607    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:39.199607    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:41.757919    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:41.757919    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:41.757919    6132 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:25:41.760630    6132 start.go:128] duration metric: took 2m7.3682634s to createHost
	I0603 05:25:41.760741    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:43.898342    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:43.898342    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:43.898342    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:46.449187    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:46.449246    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:46.454515    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:46.454515    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:46.454515    6132 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 05:25:46.609434    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417546.618159736
	
	I0603 05:25:46.609434    6132 fix.go:216] guest clock: 1717417546.618159736
	I0603 05:25:46.609500    6132 fix.go:229] Guest: 2024-06-03 05:25:46.618159736 -0700 PDT Remote: 2024-06-03 05:25:41.7606307 -0700 PDT m=+340.633671301 (delta=4.857529036s)
	I0603 05:25:46.609576    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:48.749329    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:48.749329    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:48.749562    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:51.366287    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:51.366287    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:51.374415    6132 main.go:141] libmachine: Using SSH client type: native
	I0603 05:25:51.374556    6132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.94.201 22 <nil> <nil>}
	I0603 05:25:51.374556    6132 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717417546
	I0603 05:25:51.536389    6132 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:25:46 UTC 2024
	
	I0603 05:25:51.536389    6132 fix.go:236] clock set: Mon Jun  3 12:25:46 UTC 2024 (err=<nil>)
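
fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host clock; the ~4.86s delta above exceeded the tolerance, so the guest clock was reset with `sudo date -s @<unix>`. A toy sketch of the comparison (tolerance value illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    const maxDrift = time.Second // illustrative tolerance

    func main() {
        guest := time.Unix(1717417546, 618159736) // parsed from the guest's `date +%s.%N`
        host := time.Now()
        if d := guest.Sub(host); d > maxDrift || d < -maxDrift {
            fmt.Printf("drift %v; would run: sudo date -s @%d\n", d, host.Unix())
        }
    }
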
	I0603 05:25:51.536453    6132 start.go:83] releasing machines lock for "multinode-316400-m02", held for 2m17.1440532s
	I0603 05:25:51.536767    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:53.714908    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:53.714908    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:53.715097    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:56.335050    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:25:56.335050    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:56.338148    6132 out.go:177] * Found network options:
	I0603 05:25:56.341043    6132 out.go:177]   - NO_PROXY=172.17.87.47
	W0603 05:25:56.343270    6132 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:25:56.345586    6132 out.go:177]   - NO_PROXY=172.17.87.47
	W0603 05:25:56.347569    6132 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 05:25:56.349238    6132 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:25:56.353490    6132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:25:56.353490    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:56.363560    6132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:25:56.363560    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:25:58.610979    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:58.611171    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:58.610979    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:25:58.611171    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:25:58.611171    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:25:58.611171    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:26:01.352472    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:26:01.352472    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:26:01.352472    6132 sshutil.go:53] new ssh client: &{IP:172.17.94.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:26:01.381110    6132 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:26:01.381228    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:26:01.381493    6132 sshutil.go:53] new ssh client: &{IP:172.17.94.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:26:01.455171    6132 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 05:26:01.456086    6132 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0925095s)
	W0603 05:26:01.456281    6132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:26:01.468445    6132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:26:01.567651    6132 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:26:01.567728    6132 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:26:01.567728    6132 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2142207s)
	I0603 05:26:01.567807    6132 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 05:26:01.567807    6132 start.go:494] detecting cgroup driver to use...
	I0603 05:26:01.567959    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:26:01.612346    6132 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:26:01.625745    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:26:01.659069    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:26:01.679614    6132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:26:01.692389    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:26:01.722412    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:26:01.754950    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:26:01.785993    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:26:01.816857    6132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:26:01.846934    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:26:01.878613    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:26:01.909752    6132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:26:01.940346    6132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:26:01.959162    6132 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:26:01.972708    6132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:26:02.005064    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:02.214631    6132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 05:26:02.245983    6132 start.go:494] detecting cgroup driver to use...
	I0603 05:26:02.258144    6132 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:26:02.286284    6132 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:26:02.286284    6132 command_runner.go:130] > [Unit]
	I0603 05:26:02.286284    6132 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:26:02.286284    6132 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:26:02.286284    6132 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:26:02.286284    6132 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:26:02.286284    6132 command_runner.go:130] > StartLimitBurst=3
	I0603 05:26:02.286284    6132 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:26:02.286284    6132 command_runner.go:130] > [Service]
	I0603 05:26:02.286284    6132 command_runner.go:130] > Type=notify
	I0603 05:26:02.286284    6132 command_runner.go:130] > Restart=on-failure
	I0603 05:26:02.286284    6132 command_runner.go:130] > Environment=NO_PROXY=172.17.87.47
	I0603 05:26:02.286284    6132 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:26:02.286284    6132 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:26:02.286284    6132 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:26:02.286284    6132 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:26:02.286284    6132 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:26:02.286284    6132 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:26:02.286284    6132 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:26:02.286284    6132 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:26:02.286284    6132 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:26:02.286284    6132 command_runner.go:130] > ExecStart=
	I0603 05:26:02.286284    6132 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:26:02.286284    6132 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:26:02.286284    6132 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:26:02.286284    6132 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:26:02.286284    6132 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:26:02.286284    6132 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:26:02.286284    6132 command_runner.go:130] > LimitCORE=infinity
	I0603 05:26:02.286284    6132 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:26:02.286284    6132 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:26:02.286284    6132 command_runner.go:130] > TasksMax=infinity
	I0603 05:26:02.286284    6132 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:26:02.286284    6132 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:26:02.286284    6132 command_runner.go:130] > Delegate=yes
	I0603 05:26:02.286284    6132 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:26:02.286284    6132 command_runner.go:130] > KillMode=process
	I0603 05:26:02.286284    6132 command_runner.go:130] > [Install]
	I0603 05:26:02.286284    6132 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:26:02.300498    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:26:02.333522    6132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:26:02.376977    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:26:02.418869    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:26:02.454619    6132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:26:02.514197    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:26:02.537290    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:26:02.572133    6132 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:26:02.584787    6132 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:26:02.592028    6132 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:26:02.602290    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:26:02.623999    6132 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:26:02.666863    6132 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:26:02.869410    6132 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:26:03.062277    6132 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:26:03.062449    6132 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 05:26:03.107669    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:03.306129    6132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:26:05.850178    6132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5429701s)
	I0603 05:26:05.862236    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:26:05.898533    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:26:05.936226    6132 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:26:06.134757    6132 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:26:06.352248    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:06.544343    6132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:26:06.586860    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:26:06.619683    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:06.807293    6132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:26:06.914240    6132 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:26:06.927944    6132 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:26:06.937958    6132 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:26:06.937958    6132 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:26:06.938067    6132 command_runner.go:130] > Device: 0,22	Inode: 876         Links: 1
	I0603 05:26:06.938067    6132 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:26:06.938067    6132 command_runner.go:130] > Access: 2024-06-03 12:26:06.838604857 +0000
	I0603 05:26:06.938067    6132 command_runner.go:130] > Modify: 2024-06-03 12:26:06.838604857 +0000
	I0603 05:26:06.938067    6132 command_runner.go:130] > Change: 2024-06-03 12:26:06.841604857 +0000
	I0603 05:26:06.938067    6132 command_runner.go:130] >  Birth: -
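
"Will wait 60s for socket path /var/run/cri-dockerd.sock" is another poll-with-deadline: stat the path until it exists as a socket or the deadline passes. A minimal sketch (poll interval illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/cri-dockerd.sock"
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
                fmt.Println("socket ready:", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for", sock)
    }
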
	I0603 05:26:06.938189    6132 start.go:562] Will wait 60s for crictl version
	I0603 05:26:06.950987    6132 ssh_runner.go:195] Run: which crictl
	I0603 05:26:06.956988    6132 command_runner.go:130] > /usr/bin/crictl
	I0603 05:26:06.968635    6132 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:26:07.034557    6132 command_runner.go:130] > Version:  0.1.0
	I0603 05:26:07.034630    6132 command_runner.go:130] > RuntimeName:  docker
	I0603 05:26:07.034630    6132 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:26:07.034630    6132 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:26:07.034703    6132 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:26:07.044613    6132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:26:07.075191    6132 command_runner.go:130] > 26.0.2
	I0603 05:26:07.085871    6132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:26:07.119868    6132 command_runner.go:130] > 26.0.2
	I0603 05:26:07.123432    6132 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:26:07.125980    6132 out.go:177]   - env NO_PROXY=172.17.87.47
	I0603 05:26:07.128317    6132 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:26:07.132984    6132 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:26:07.132984    6132 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:26:07.132984    6132 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:26:07.132984    6132 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:26:07.135361    6132 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:26:07.135361    6132 ip.go:210] interface addr: 172.17.80.1/20
	I0603 05:26:07.147785    6132 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:26:07.154608    6132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
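
The bash one-liner above is an idempotent hosts-file update: keep every line except a stale host.minikube.internal entry, append the current mapping, then sudo-copy the temp file into place (the redirect itself runs unprivileged, hence the staging in /tmp). Unrolled for readability:

    { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any old entry
      printf '172.17.80.1\thost.minikube.internal\n'    # append the current one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                        # sudo only for the final copy
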
	I0603 05:26:07.175892    6132 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:26:07.176625    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:26:07.177286    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:26:09.358636    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:26:09.359015    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:26:09.359015    6132 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:26:09.359015    6132 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.94.201
	I0603 05:26:09.359591    6132 certs.go:194] generating shared ca certs ...
	I0603 05:26:09.359591    6132 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:26:09.360175    6132 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:26:09.360453    6132 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:26:09.360642    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:26:09.360751    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:26:09.360986    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:26:09.361261    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:26:09.361666    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:26:09.362154    6132 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:26:09.362225    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:26:09.362225    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:26:09.362914    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:26:09.363200    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:26:09.363269    6132 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:26:09.363845    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:26:09.363935    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:26:09.364100    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:26:09.364388    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:26:09.411723    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:26:09.456589    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:26:09.505025    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:26:09.556764    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:26:09.600515    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:26:09.645321    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:26:09.702729    6132 ssh_runner.go:195] Run: openssl version
	I0603 05:26:09.712309    6132 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:26:09.724115    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:26:09.753865    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:26:09.761354    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:26:09.761354    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:26:09.773389    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:26:09.781486    6132 command_runner.go:130] > 3ec20f2e
	I0603 05:26:09.795872    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 05:26:09.827141    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:26:09.857249    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:26:09.863398    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:26:09.863398    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:26:09.876055    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:26:09.884397    6132 command_runner.go:130] > b5213941
	I0603 05:26:09.896709    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 05:26:09.927211    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:26:09.960549    6132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:26:09.970624    6132 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:26:09.970709    6132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:26:09.982751    6132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:26:09.992208    6132 command_runner.go:130] > 51391683
	I0603 05:26:10.004889    6132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
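
The hash-and-symlink pattern above is OpenSSL's subject-hash lookup scheme: verification code locates a CA by the hash of its subject name (the 3ec20f2e, b5213941 and 51391683 values printed above), so each certificate needs a <hash>.0 symlink under /etc/ssl/certs. One round of the pattern, using the minikubeCA case from the log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
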
	I0603 05:26:10.039105    6132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:26:10.048231    6132 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:26:10.048667    6132 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
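
The failed stat is expected here: a missing apiserver-kubelet-client.crt is how minikube detects that this node has never been provisioned. The same probe by hand (illustrative only):

    sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      || echo "cert missing, treating this as a first start"
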
	I0603 05:26:10.049042    6132 kubeadm.go:928] updating node {m02 172.17.94.201 8443 v1.30.1 docker false true} ...
	I0603 05:26:10.049305    6132 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.94.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
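
The rendered unit text above lands on the node as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in scp'd a few lines below. The empty ExecStart= line is deliberate systemd idiom: it clears any inherited ExecStart before the node-specific one is set. To inspect the merged result on the node:

    systemctl cat kubelet            # unit file plus all drop-ins, as systemd sees them
    systemctl status kubelet --no-pager
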
	I0603 05:26:10.063463    6132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:26:10.083693    6132 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0603 05:26:10.084253    6132 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 05:26:10.096103    6132 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 05:26:10.113576    6132 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 05:26:10.113576    6132 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 05:26:10.113576    6132 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
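
The checksum=file:...sha256 suffix means each binary is fetched from dl.k8s.io and verified against its published SHA-256 rather than served from the local cache. A minimal manual equivalent of that download-and-verify step (the loop and paths are illustrative, not minikube's code):

    ver=v1.30.1
    for bin in kubeadm kubectl kubelet; do
      curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}"
      echo "$(curl -fsSL "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}.sha256")  ${bin}" \
        | sha256sum --check
    done
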
	I0603 05:26:10.114712    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 05:26:10.114712    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 05:26:10.131884    6132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 05:26:10.132394    6132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 05:26:10.132881    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:26:10.139005    6132 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 05:26:10.139005    6132 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 05:26:10.139224    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 05:26:10.139899    6132 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 05:26:10.140882    6132 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 05:26:10.140987    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 05:26:10.180246    6132 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 05:26:10.193358    6132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 05:26:10.279373    6132 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 05:26:10.292285    6132 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 05:26:10.292521    6132 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 05:26:11.380247    6132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 05:26:11.400918    6132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0603 05:26:11.431576    6132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:26:11.475075    6132 ssh_runner.go:195] Run: grep 172.17.87.47	control-plane.minikube.internal$ /etc/hosts
	I0603 05:26:11.482306    6132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.87.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:26:11.518013    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:11.713021    6132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:26:11.742875    6132 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:26:11.743097    6132 start.go:316] joinCluster: &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:26:11.743703    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 05:26:11.743873    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:26:13.970273    6132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:26:13.970273    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:26:13.970273    6132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:26:16.511847    6132 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:26:16.511847    6132 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:26:16.512150    6132 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:26:16.708200    6132 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bn2tbh.eh309rxrlp1xwiyr --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
	I0603 05:26:16.708311    6132 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9645915s)
	I0603 05:26:16.708311    6132 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:26:16.708439    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bn2tbh.eh309rxrlp1xwiyr --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02"
	I0603 05:26:16.915198    6132 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 05:26:18.246810    6132 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:26:18.246810    6132 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0603 05:26:18.246810    6132 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0603 05:26:18.246810    6132 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:26:18.246910    6132 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:26:18.246910    6132 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:26:18.246910    6132 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 05:26:18.246910    6132 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002094895s
	I0603 05:26:18.246910    6132 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0603 05:26:18.246910    6132 command_runner.go:130] > This node has joined the cluster:
	I0603 05:26:18.246910    6132 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0603 05:26:18.247002    6132 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0603 05:26:18.247002    6132 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0603 05:26:18.247091    6132 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bn2tbh.eh309rxrlp1xwiyr --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02": (1.5386209s)
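
The join that just completed is the standard two-step kubeadm flow: the control plane mints a join command (token plus discovery CA-cert hash, printed at 05:26:16 above, with --ttl=0 making the token non-expiring), and the worker replays it with the cri-dockerd socket and an explicit node name. A condensed recap of the commands already shown:

    # on the control plane
    join_cmd=$(sudo kubeadm token create --print-join-command --ttl=0)
    # on the worker (word-splitting of ${join_cmd} is intentional)
    sudo ${join_cmd} --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02
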
	I0603 05:26:18.247157    6132 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 05:26:18.468972    6132 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0603 05:26:18.652431    6132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-316400-m02 minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=multinode-316400 minikube.k8s.io/primary=false
	I0603 05:26:18.772597    6132 command_runner.go:130] > node/multinode-316400-m02 labeled
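
The label call stamps provenance metadata (minikube version, commit, update timestamp) onto the new node object, and --overwrite makes it safe to re-run. To confirm what was applied:

    kubectl get node multinode-316400-m02 --show-labels
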
	I0603 05:26:18.772597    6132 start.go:318] duration metric: took 7.0294762s to joinCluster
	I0603 05:26:18.772597    6132 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:26:18.774149    6132 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:26:18.776759    6132 out.go:177] * Verifying Kubernetes components...
	I0603 05:26:18.791555    6132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:26:19.012029    6132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:26:19.040217    6132 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:26:19.041642    6132 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.87.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:26:19.042446    6132 node_ready.go:35] waiting up to 6m0s for node "multinode-316400-m02" to be "Ready" ...
	I0603 05:26:19.042553    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:19.042687    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:19.042687    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:19.042687    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:19.060242    6132 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0603 05:26:19.060301    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:19.060301    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:19.060368    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:19.060368    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:19.060368    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:19.060368    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:19 GMT
	I0603 05:26:19.060403    6132 round_trippers.go:580]     Audit-Id: 0cbdd603-4bee-434f-8a1d-3987860c94fe
	I0603 05:26:19.060403    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:19.060466    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
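
The requests that follow are this same GET replayed roughly every 500ms until the node's Ready condition flips to True (note the unchanged resourceVersion 589 in each response). The raw polling is equivalent to either of these kubectl forms:

    kubectl wait node/multinode-316400-m02 --for=condition=Ready --timeout=6m
    # or inspect the condition directly:
    kubectl get node multinode-316400-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
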
	I0603 05:26:19.548745    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:19.548824    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:19.548824    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:19.548824    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:19.553060    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:19.553149    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:19.553149    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:19.553250    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:19 GMT
	I0603 05:26:19.553334    6132 round_trippers.go:580]     Audit-Id: 054454a5-146f-43f3-8855-14ef88f83e77
	I0603 05:26:19.553334    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:19.553334    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:19.553334    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:19.553334    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:19.553334    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:20.049351    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:20.049607    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:20.049607    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:20.049607    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:20.053301    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:20.053301    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:20.053301    6132 round_trippers.go:580]     Audit-Id: d7a89103-3c26-407c-bf3d-d6ec716f9e1c
	I0603 05:26:20.053301    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:20.053609    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:20.053609    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:20.053609    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:20.053609    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:20.053609    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:20 GMT
	I0603 05:26:20.053775    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:20.551073    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:20.551128    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:20.551128    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:20.551128    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:20.553965    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:26:20.553965    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:20.555005    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:20.555049    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:20.555049    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:20.555049    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:20.555049    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:20.555049    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:20 GMT
	I0603 05:26:20.555049    6132 round_trippers.go:580]     Audit-Id: 0e75b820-d7b6-4e0e-a958-ee29b41e88b6
	I0603 05:26:20.555233    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:21.050136    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:21.050357    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:21.050357    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:21.050357    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:21.055025    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:21.055025    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:21.055095    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:21 GMT
	I0603 05:26:21.055095    6132 round_trippers.go:580]     Audit-Id: 8fc4c05b-e5a3-4e4d-92b3-4b689cb03f8c
	I0603 05:26:21.055095    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:21.055095    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:21.055095    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:21.055095    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:21.055095    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:21.055331    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:21.055763    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:21.551399    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:21.551675    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:21.551675    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:21.551675    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:21.555584    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:21.555675    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:21.555675    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:21.555675    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:21.555675    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:21.555675    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:21.555675    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:21 GMT
	I0603 05:26:21.555675    6132 round_trippers.go:580]     Audit-Id: 34ef6e62-62a6-4a43-a577-5b8d2f6b85a6
	I0603 05:26:21.555675    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:21.555848    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:22.050342    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:22.050446    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:22.050446    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:22.050446    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:22.054552    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:22.054552    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:22.054552    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:22.054552    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:22.054552    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:22.054552    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:22 GMT
	I0603 05:26:22.054860    6132 round_trippers.go:580]     Audit-Id: e28a13da-ed29-45f7-806b-4271bc1577d4
	I0603 05:26:22.054860    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:22.054884    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:22.055043    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:22.553483    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:22.553483    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:22.553483    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:22.553725    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:22.558022    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:22.558022    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:22.558022    6132 round_trippers.go:580]     Audit-Id: bf8db466-08d2-4cd4-9dc0-80d1f789d4a8
	I0603 05:26:22.558022    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:22.558502    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:22.558502    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:22.558502    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:22.558502    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:22.558502    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:22 GMT
	I0603 05:26:22.558502    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:23.051225    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:23.051225    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:23.051563    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:23.051563    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:23.054986    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:23.054986    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:23.055978    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:23 GMT
	I0603 05:26:23.055978    6132 round_trippers.go:580]     Audit-Id: 5447f43c-9450-4fdd-9f34-d712c37301f0
	I0603 05:26:23.056009    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:23.056009    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:23.056009    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:23.056009    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:23.056009    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:23.056142    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:23.056287    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:23.556937    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:23.557017    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:23.557017    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:23.557017    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:23.564327    6132 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:26:23.564327    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:23.564705    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:23.564705    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:23.564705    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:23.564705    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:23.564705    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:23.564705    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:23 GMT
	I0603 05:26:23.564705    6132 round_trippers.go:580]     Audit-Id: 11611c08-13ec-490e-86d2-fad353ea8cb0
	I0603 05:26:23.564873    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:24.056335    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:24.056405    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:24.056405    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:24.056405    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:24.061070    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:24.061070    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:24.061070    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:24 GMT
	I0603 05:26:24.061070    6132 round_trippers.go:580]     Audit-Id: a9bd4717-9c12-49e7-a1da-2b09ae46a2d4
	I0603 05:26:24.061326    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:24.061326    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:24.061326    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:24.061326    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:24.061326    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:24.061467    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:24.543258    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:24.543258    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:24.543258    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:24.543258    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:24.548656    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:26:24.548656    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:24.549103    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:24.549103    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:24.549103    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:24.549103    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:24.549103    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:24.549103    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:24 GMT
	I0603 05:26:24.549103    6132 round_trippers.go:580]     Audit-Id: 40c8957d-38fc-4136-b8af-26f716d58453
	I0603 05:26:24.549199    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:25.052225    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:25.052280    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:25.052280    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:25.052280    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:25.055887    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:25.056087    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:25.056087    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:25.056087    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:25.056087    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:25.056087    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:25.056087    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:25.056087    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:25 GMT
	I0603 05:26:25.056150    6132 round_trippers.go:580]     Audit-Id: 76056f5e-75cb-49dc-ae27-1630ee9b6d0a
	I0603 05:26:25.056150    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:25.544816    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:25.544816    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:25.544816    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:25.544816    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:25.548199    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:25.549128    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:25.549128    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:25.549128    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:25.549128    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:25.549175    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:25.549175    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:25 GMT
	I0603 05:26:25.549175    6132 round_trippers.go:580]     Audit-Id: 448af69e-2c02-440d-bdb2-37ed5aae0606
	I0603 05:26:25.549175    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:25.549382    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:25.549486    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
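The node_ready.go:53 line marks one pass of a readiness poll: the repeating GET /api/v1/nodes/multinode-316400-m02 cycles above re-fetch the node object at a roughly 500 ms cadence until its Ready condition turns True. A minimal sketch of that loop using client-go (waitNodeReady is a hypothetical helper, not minikube's actual code):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the node object, as the GET cycles above do,
    // until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
    	}
    	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
    }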
	I0603 05:26:26.053351    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:26.053351    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:26.053351    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:26.053351    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:26.058827    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:26:26.058827    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:26.058827    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:26.058827    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:26 GMT
	I0603 05:26:26.058827    6132 round_trippers.go:580]     Audit-Id: ea39dfef-8c5c-4f8e-b12e-251adc27ffa8
	I0603 05:26:26.058827    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:26.058827    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:26.058827    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:26.058827    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:26.058979    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:26.546870    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:26.546870    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:26.546870    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:26.546870    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:26.551780    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:26.552017    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:26.552017    6132 round_trippers.go:580]     Audit-Id: 828704c4-3029-4c57-a728-c8c5b1b1cbfd
	I0603 05:26:26.552017    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:26.552017    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:26.552017    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:26.552017    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:26.552017    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:26.552017    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:26 GMT
	I0603 05:26:26.552396    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:27.052013    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:27.052013    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:27.052013    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:27.052013    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:27.059864    6132 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:26:27.059864    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:27.059961    6132 round_trippers.go:580]     Audit-Id: c40c26b9-fa9e-4d0a-af3b-2cb9d3384c79
	I0603 05:26:27.059961    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:27.059961    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:27.059961    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:27.059961    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:27.059961    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:27.059961    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:27 GMT
	I0603 05:26:27.059961    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:27.557992    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:27.558247    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:27.558247    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:27.558247    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:27.562104    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:27.562104    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:27.562104    6132 round_trippers.go:580]     Audit-Id: a8c6439e-c553-475d-8685-0e3f3c215eb0
	I0603 05:26:27.562209    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:27.562209    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:27.562209    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:27.562209    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:27.562209    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:27.562209    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:27 GMT
	I0603 05:26:27.562367    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:27.562595    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:28.045999    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:28.045999    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:28.045999    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:28.045999    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:28.318663    6132 round_trippers.go:574] Response Status: 200 OK in 272 milliseconds
	I0603 05:26:28.319048    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:28.319048    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:28.319048    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:28.319048    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:28.319048    6132 round_trippers.go:580]     Content-Length: 4029
	I0603 05:26:28.319048    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:28 GMT
	I0603 05:26:28.319048    6132 round_trippers.go:580]     Audit-Id: 72d227af-092c-48f0-9823-34f6c8f661c9
	I0603 05:26:28.319048    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:28.321058    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"589","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0603 05:26:28.549777    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:28.549777    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:28.549777    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:28.549777    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:28.862039    6132 round_trippers.go:574] Response Status: 200 OK in 312 milliseconds
	I0603 05:26:28.862461    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:28.862461    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:28.862461    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:28.862543    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:28 GMT
	I0603 05:26:28.862543    6132 round_trippers.go:580]     Audit-Id: 1f6cea95-83e7-4342-9477-e87194c31eed
	I0603 05:26:28.862543    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:28.862543    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:28.863094    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:29.049180    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:29.049180    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:29.049180    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:29.049180    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:29.054496    6132 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:26:29.054496    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:29.054496    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:29.054496    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:29.054496    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:29.054496    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:29.054496    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:29 GMT
	I0603 05:26:29.054496    6132 round_trippers.go:580]     Audit-Id: 58ee1cc4-6047-47d9-9509-ca8b532a75a1
	I0603 05:26:29.055627    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:29.544221    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:29.544221    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:29.544221    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:29.544221    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:29.550227    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:26:29.550227    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:29.550227    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:29.550227    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:29.550227    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:29 GMT
	I0603 05:26:29.550227    6132 round_trippers.go:580]     Audit-Id: 53a4367a-ad45-4afe-beb8-ad086f663a38
	I0603 05:26:29.550529    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:29.550529    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:29.550820    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:30.046868    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:30.046868    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:30.046868    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:30.046868    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:30.050623    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:30.050683    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:30.050683    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:30.050683    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:30 GMT
	I0603 05:26:30.050683    6132 round_trippers.go:580]     Audit-Id: fb064e0f-89d5-4b32-8ebc-d1d550983644
	I0603 05:26:30.050683    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:30.050683    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:30.050683    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:30.051479    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:30.051567    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:30.556059    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:30.556059    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:30.556059    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:30.556059    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:30.559650    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:30.559650    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:30.559650    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:30 GMT
	I0603 05:26:30.559650    6132 round_trippers.go:580]     Audit-Id: ccac57b7-14d4-49ae-9577-7bb9aa42ac2d
	I0603 05:26:30.559650    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:30.559650    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:30.559650    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:30.559650    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:30.559650    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:31.050295    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:31.050387    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:31.050387    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:31.050387    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:31.055038    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:31.055038    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:31.055038    6132 round_trippers.go:580]     Audit-Id: 8b5193d8-e42c-48b5-853f-0db73f9acde7
	I0603 05:26:31.055038    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:31.055038    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:31.055038    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:31.055038    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:31.055038    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:31 GMT
	I0603 05:26:31.055038    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:31.544431    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:31.544510    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:31.544543    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:31.544543    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:31.547850    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:31.548631    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:31.548631    6132 round_trippers.go:580]     Audit-Id: 75308c14-5fd6-4510-87dd-cf02ab98eac8
	I0603 05:26:31.548631    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:31.548631    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:31.548631    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:31.548631    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:31.548707    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:31 GMT
	I0603 05:26:31.548972    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:32.048835    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:32.048835    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:32.048835    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:32.048835    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:32.052829    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:32.052829    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:32.052829    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:32.052829    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:32.052829    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:32 GMT
	I0603 05:26:32.052829    6132 round_trippers.go:580]     Audit-Id: b35577f7-b8db-4905-8ca4-f1b1711d7429
	I0603 05:26:32.052829    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:32.052829    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:32.052829    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:32.053821    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:32.556338    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:32.556405    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:32.556405    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:32.556448    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:32.560127    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:32.560322    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:32.560322    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:32.560322    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:32.560322    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:32.560322    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:32 GMT
	I0603 05:26:32.560322    6132 round_trippers.go:580]     Audit-Id: 2c1e9fdb-1161-4118-9760-5f66ca769e9f
	I0603 05:26:32.560322    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:32.560402    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:33.044095    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:33.044095    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:33.044095    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:33.044095    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:33.048798    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:33.048798    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:33.048798    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:33.048798    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:33.048798    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:33 GMT
	I0603 05:26:33.048798    6132 round_trippers.go:580]     Audit-Id: 70634dcd-699b-40e4-970c-aa43e5bf56b3
	I0603 05:26:33.048798    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:33.048798    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:33.048798    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:33.544626    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:33.544873    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:33.544873    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:33.544873    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:33.760793    6132 round_trippers.go:574] Response Status: 200 OK in 215 milliseconds
	I0603 05:26:33.760793    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:33.760793    6132 round_trippers.go:580]     Audit-Id: 09638753-5418-4031-aa99-3fca401c96ae
	I0603 05:26:33.760793    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:33.760793    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:33.760793    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:33.760793    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:33.760793    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:33 GMT
	I0603 05:26:33.761423    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:34.045623    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:34.045623    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:34.045623    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:34.045623    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:34.050282    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:34.050991    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:34.050991    6132 round_trippers.go:580]     Audit-Id: 59201870-d36b-4299-8c22-427f1e3cbf50
	I0603 05:26:34.050991    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:34.050991    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:34.050991    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:34.050991    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:34.050991    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:34 GMT
	I0603 05:26:34.051274    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:34.548855    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:34.548855    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:34.548855    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:34.548855    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:34.552521    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:34.552521    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:34.552521    6132 round_trippers.go:580]     Audit-Id: f9789dfc-b7c3-42f5-9614-b77d8a7123ba
	I0603 05:26:34.552521    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:34.552521    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:34.552521    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:34.552521    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:34.552521    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:34 GMT
	I0603 05:26:34.553231    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:34.553706    6132 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:26:35.046616    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:35.046835    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:35.046835    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:35.046835    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:35.050666    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:35.051345    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:35.051345    6132 round_trippers.go:580]     Audit-Id: 15ddb28b-11f1-41b0-9ea1-810e3fde5fd0
	I0603 05:26:35.051345    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:35.051345    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:35.051345    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:35.051345    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:35.051345    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:35 GMT
	I0603 05:26:35.051751    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:35.546163    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:35.546163    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:35.546455    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:35.546455    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:35.549834    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:35.550303    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:35.550303    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:35.550303    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:35.550303    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:35 GMT
	I0603 05:26:35.550303    6132 round_trippers.go:580]     Audit-Id: 840988ab-2494-46fe-821e-7ccd55f836f2
	I0603 05:26:35.550303    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:35.550303    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:35.550548    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:36.046445    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:36.046519    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:36.046590    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:36.046590    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:36.050403    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:36.051375    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:36.051375    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:36 GMT
	I0603 05:26:36.051375    6132 round_trippers.go:580]     Audit-Id: de2bd27b-318d-41cf-9534-8f916b6822fb
	I0603 05:26:36.051375    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:36.051375    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:36.051375    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:36.051375    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:36.051646    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:36.546166    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:36.546241    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:36.546241    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:36.546241    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:36.549828    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:36.550166    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:36.550166    6132 round_trippers.go:580]     Audit-Id: 062ec519-d017-48b0-9150-d238eff85a7d
	I0603 05:26:36.550236    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:36.550236    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:36.550236    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:36.550236    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:36.550236    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:36 GMT
	I0603 05:26:36.550236    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"603","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0603 05:26:37.044153    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:37.044153    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.044153    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.044153    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.053855    6132 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:26:37.054006    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.054006    6132 round_trippers.go:580]     Audit-Id: f82ae90c-99a6-496d-85af-822e3781e186
	I0603 05:26:37.054132    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.054132    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.054132    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.054132    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.054132    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.054439    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"624","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3263 chars]
	I0603 05:26:37.054762    6132 node_ready.go:49] node "multinode-316400-m02" has status "Ready":"True"
	I0603 05:26:37.054762    6132 node_ready.go:38] duration metric: took 18.0121472s for node "multinode-316400-m02" to be "Ready" ...
	I0603 05:26:37.054762    6132 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:26:37.054762    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods
	I0603 05:26:37.054762    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.054762    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.054762    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.061069    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:26:37.061069    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.061069    6132 round_trippers.go:580]     Audit-Id: 24d6cc67-603e-4569-97c7-a37f25556a87
	I0603 05:26:37.061069    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.061069    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.061069    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.061069    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.061069    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.063039    6132 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"624"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"422","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70380 chars]
	I0603 05:26:37.067450    6132 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.067609    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:26:37.067609    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.067680    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.067680    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.070116    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:26:37.070116    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.070844    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.070844    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.070844    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.070844    6132 round_trippers.go:580]     Audit-Id: 6401175b-0c10-4d84-bf0c-5de871760301
	I0603 05:26:37.070844    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.070844    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.073185    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"422","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0603 05:26:37.073456    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.073456    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.073456    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.073456    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.077933    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:37.077990    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.078042    6132 round_trippers.go:580]     Audit-Id: 06e1e407-3f0f-4391-9e86-060fa8b3f473
	I0603 05:26:37.078042    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.078042    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.078042    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.078081    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.078081    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.081075    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:37.081075    6132 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.081075    6132 pod_ready.go:81] duration metric: took 13.6251ms for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.081075    6132 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.081075    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:26:37.081977    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.081977    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.081977    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.084029    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:26:37.084972    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.084972    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.084972    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.084972    6132 round_trippers.go:580]     Audit-Id: 15d15851-2682-4e59-95eb-09d6e8a38e9a
	I0603 05:26:37.084972    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.084972    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.084972    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.084972    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"5a3b396d-1240-4c67-b2f5-e5664e068bfe","resourceVersion":"383","creationTimestamp":"2024-06-03T12:23:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.87.47:2379","kubernetes.io/config.hash":"b79ce6c8ebbce53597babbe73b1962c9","kubernetes.io/config.mirror":"b79ce6c8ebbce53597babbe73b1962c9","kubernetes.io/config.seen":"2024-06-03T12:22:56.267029490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0603 05:26:37.084972    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.084972    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.084972    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.084972    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.091034    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:26:37.091034    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.091034    6132 round_trippers.go:580]     Audit-Id: c3ee92b6-aebb-443d-8697-af9bcbc73f77
	I0603 05:26:37.091034    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.091034    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.091034    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.091034    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.091034    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.092036    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:37.092036    6132 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.092036    6132 pod_ready.go:81] duration metric: took 10.9602ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.092036    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.092036    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:26:37.092036    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.092036    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.092036    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.095039    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:37.096073    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.096073    6132 round_trippers.go:580]     Audit-Id: cb18aac4-345f-4fda-9e30-46388dd07560
	I0603 05:26:37.096073    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.096073    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.096073    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.096073    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.096073    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.096141    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"0cdcee20-9dca-4eca-b92f-a7214368dd5e","resourceVersion":"381","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.87.47:8443","kubernetes.io/config.hash":"171c5f025e4267e9949ddac2f1863980","kubernetes.io/config.mirror":"171c5f025e4267e9949ddac2f1863980","kubernetes.io/config.seen":"2024-06-03T12:22:56.267035289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0603 05:26:37.096141    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.096141    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.096141    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.096141    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.103242    6132 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:26:37.103242    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.103242    6132 round_trippers.go:580]     Audit-Id: 7c3a61a9-f4fb-4c13-9424-00c8033bb4a8
	I0603 05:26:37.103242    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.103242    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.103242    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.103242    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.103242    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.103242    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:37.103978    6132 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.103978    6132 pod_ready.go:81] duration metric: took 11.9423ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.103978    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.103978    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:26:37.103978    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.103978    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.103978    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.106526    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:26:37.107398    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.107398    6132 round_trippers.go:580]     Audit-Id: 5d3210e9-f989-42ea-b152-df5751d07207
	I0603 05:26:37.107455    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.107546    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.107546    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.107546    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.107546    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.107973    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"384","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0603 05:26:37.108526    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.108526    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.108526    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.108526    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.111439    6132 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:26:37.111439    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.111439    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.111439    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.111439    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.111439    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.111439    6132 round_trippers.go:580]     Audit-Id: e3dc05ab-baeb-4739-8f83-e773d9597b00
	I0603 05:26:37.111439    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.111439    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:37.112420    6132 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.112420    6132 pod_ready.go:81] duration metric: took 8.4425ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.112420    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.248523    6132 request.go:629] Waited for 135.907ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:26:37.248599    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:26:37.248599    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.248599    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.248672    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.253600    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:37.253600    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.253985    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.253985    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.253985    6132 round_trippers.go:580]     Audit-Id: ad478334-acc0-47e7-9427-7da3449743e1
	I0603 05:26:37.253985    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.253985    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.253985    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.254189    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"376","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0603 05:26:37.449382    6132 request.go:629] Waited for 194.1966ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.449382    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:37.449382    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.449382    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.449382    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.453081    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:37.453611    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.453611    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.453611    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.453611    6132 round_trippers.go:580]     Audit-Id: e137b18b-507a-4205-84ca-7818ee1476ef
	I0603 05:26:37.453611    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.453611    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.453611    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.454075    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:37.454978    6132 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.454978    6132 pod_ready.go:81] duration metric: took 342.5568ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.454978    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.652293    6132 request.go:629] Waited for 197.0895ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:26:37.652670    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:26:37.652799    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.652799    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.652799    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.656626    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:37.657075    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.657075    6132 round_trippers.go:580]     Audit-Id: 3fd10e79-989e-4b50-ab9a-671e99b4c453
	I0603 05:26:37.657075    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.657075    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.657075    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.657075    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.657075    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.657338    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"609","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0603 05:26:37.855016    6132 request.go:629] Waited for 196.9827ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:37.855442    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:26:37.855442    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:37.855442    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:37.855442    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:37.860234    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:37.860274    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:37.860274    6132 round_trippers.go:580]     Audit-Id: e6a3b12f-c164-4b5a-bec3-da57461234cd
	I0603 05:26:37.860274    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:37.860274    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:37.860274    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:37.860274    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:37.860274    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:37 GMT
	I0603 05:26:37.860274    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"624","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3263 chars]
	I0603 05:26:37.861094    6132 pod_ready.go:92] pod "kube-proxy-z26hc" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:37.861094    6132 pod_ready.go:81] duration metric: took 406.1144ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:37.861094    6132 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:38.056585    6132 request.go:629] Waited for 195.1277ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:26:38.056585    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:26:38.056585    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:38.056585    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:38.056585    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:38.061243    6132 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:26:38.061302    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:38.061302    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:38.061302    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:38.061302    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:38.061302    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:38.061302    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:38 GMT
	I0603 05:26:38.061302    6132 round_trippers.go:580]     Audit-Id: af63f5a4-46a2-40e6-86b8-1cc9fd79d9da
	I0603 05:26:38.061302    6132 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"382","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0603 05:26:38.247233    6132 request.go:629] Waited for 184.9231ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:38.247355    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes/multinode-316400
	I0603 05:26:38.247355    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:38.247355    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:38.247355    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:38.254323    6132 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:26:38.254323    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:38.254323    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:38 GMT
	I0603 05:26:38.254323    6132 round_trippers.go:580]     Audit-Id: 6825c3f6-2d0a-469d-992b-c4640a9e9f03
	I0603 05:26:38.254323    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:38.254323    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:38.254323    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:38.254323    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:38.255084    6132 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0603 05:26:38.255084    6132 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:26:38.255615    6132 pod_ready.go:81] duration metric: took 394.5199ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:26:38.255688    6132 pod_ready.go:38] duration metric: took 1.2009224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:26:38.255688    6132 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:26:38.268489    6132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:26:38.293012    6132 system_svc.go:56] duration metric: took 37.324ms WaitForService to wait for kubelet
	I0603 05:26:38.293012    6132 kubeadm.go:576] duration metric: took 19.5203485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:26:38.293012    6132 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:26:38.449519    6132 request.go:629] Waited for 156.253ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.87.47:8443/api/v1/nodes
	I0603 05:26:38.449519    6132 round_trippers.go:463] GET https://172.17.87.47:8443/api/v1/nodes
	I0603 05:26:38.449714    6132 round_trippers.go:469] Request Headers:
	I0603 05:26:38.449714    6132 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:26:38.449777    6132 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:26:38.453164    6132 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:26:38.453164    6132 round_trippers.go:577] Response Headers:
	I0603 05:26:38.453421    6132 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:26:38.453421    6132 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:26:38 GMT
	I0603 05:26:38.453421    6132 round_trippers.go:580]     Audit-Id: 4094fa4d-7c2b-4f2a-bd5c-a6fc98ad921f
	I0603 05:26:38.453421    6132 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:26:38.453421    6132 round_trippers.go:580]     Content-Type: application/json
	I0603 05:26:38.453421    6132 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:26:38.454156    6132 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"430","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9146 chars]
	I0603 05:26:38.455045    6132 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:26:38.455045    6132 node_conditions.go:123] node cpu capacity is 2
	I0603 05:26:38.455045    6132 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:26:38.455045    6132 node_conditions.go:123] node cpu capacity is 2
	I0603 05:26:38.455045    6132 node_conditions.go:105] duration metric: took 162.0321ms to run NodePressure ...
	I0603 05:26:38.455045    6132 start.go:240] waiting for startup goroutines ...
	I0603 05:26:38.455045    6132 start.go:254] writing updated cluster config ...
	I0603 05:26:38.469421    6132 ssh_runner.go:195] Run: rm -f paused
	I0603 05:26:38.613500    6132 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 05:26:38.617998    6132 out.go:177] * Done! kubectl is now configured to use "multinode-316400" cluster and "default" namespace by default
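For reference, the node-readiness wait recorded above (repeated GETs of /api/v1/nodes/multinode-316400-m02 roughly every 500ms until the node reports "Ready":"True") can be sketched standalone with client-go. This is an illustrative sketch, not minikube's implementation; the node name is taken from this run, and the kubeconfig path is client-go's default:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the Node's Ready condition is True,
    // i.e. the "Ready":"True" status that node_ready.go logs above.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-316400-m02", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll cadence seen in the log above
    	}
    }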
	
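The "Waited for ... due to client-side throttling, not priority and fairness" entries interleaved above come from client-go's client-side rate limiter, which defaults to 5 QPS with a burst of 10; the sub-200ms waits are expected at this request rate and are not a cluster-side problem. A minimal sketch of raising those limits on a rest.Config (the values are illustrative, not minikube's settings):

    package clientutil

    import "k8s.io/client-go/rest"

    // RaiseClientLimits bumps client-go's client-side rate limiter,
    // which otherwise throttles at 5 QPS (burst 10) and emits the
    // "Waited for ... due to client-side throttling" messages above.
    func RaiseClientLimits(cfg *rest.Config) {
    	cfg.QPS = 50    // illustrative value; client-go default is 5
    	cfg.Burst = 100 // illustrative value; client-go default is 10
    }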
	
	==> Docker <==
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.497692071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.510548358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.510752055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.511561041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.511847137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 cri-dockerd[1217]: time="2024-06-03T12:23:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 12:23:31 multinode-316400 cri-dockerd[1217]: time="2024-06-03T12:23:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e/resolv.conf as [nameserver 172.17.80.1]"
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.902616660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.902737157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.902812856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.903037652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.982596193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.982901688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.983061985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:23:31 multinode-316400 dockerd[1315]: time="2024-06-03T12:23:31.983251581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:27:03 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:03.733315170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:27:03 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:03.733456770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:27:03 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:03.733479770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:27:03 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:03.736409269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:27:03 multinode-316400 cri-dockerd[1217]: time="2024-06-03T12:27:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 12:27:05 multinode-316400 cri-dockerd[1217]: time="2024-06-03T12:27:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 12:27:05 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:05.364525077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:27:05 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:05.367175951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:27:05 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:05.367520347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:27:05 multinode-316400 dockerd[1315]: time="2024-06-03T12:27:05.368131641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	8280b39046781       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	f3d3a474bbe63       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   4956a24c17e70       storage-provisioner
	a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	ad08c7b8f3aff       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	29c39ff8468f2       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   77f0d5d979f87       etcd-multinode-316400
	f39be6db7a1f8       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	8c884e5bfb961       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   10b8b906c7ece       kube-apiserver-multinode-316400
	3d7dc29a57912       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
	
	
	==> coredns [8280b3904678] <==
	[INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	[INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	[INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	[INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	[INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	[INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	[INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	[INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	[INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	[INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	[INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	[INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	[INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	[INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	[INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	[INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	[INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	[INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	[INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	[INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	[INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	[INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	[INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	[INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
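Each coredns entry above follows the log plugin's common format: client ip:port, query id, the quoted query (type, class, name, protocol, request size, DNSSEC "do" bit, advertised UDP buffer size), then the response code, flags, response size, and duration. A lookup like the logged "A IN kubernetes.default.svc.cluster.local." queries can be reproduced from inside any pod, whose /etc/resolv.conf points at the cluster DNS (10.96.0.10 in this cluster, per the cri-dockerd lines above); a minimal sketch:

    package main

    import (
    	"fmt"
    	"net"
    )

    // Resolving the API server's in-cluster name from a pod sends an
    // A (and AAAA) query to the cluster DNS, producing coredns log
    // lines like the ones shown above.
    func main() {
    	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(addrs) // typically the kubernetes Service ClusterIP
    }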
	
	
	==> describe nodes <==
	Name:               multinode-316400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-316400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-316400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-316400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:27:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:27:40 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:27:40 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:27:40 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:27:40 +0000   Mon, 03 Jun 2024 12:23:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.87.47
	  Hostname:    multinode-316400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f279e5c5372476b977b38d462c184dc
	  System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	  Boot ID:                    ff4b2e14-cd57-4a29-8fbf-9ea0c2371a40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m34s
	  kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  Starting                 4m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m35s  node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	  Normal  NodeReady                4m23s  kubelet          Node multinode-316400 status is now: NodeReady
	
	
	Name:               multinode-316400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-316400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-316400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-316400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:27:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:27:19 +0000   Mon, 03 Jun 2024 12:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:27:19 +0000   Mon, 03 Jun 2024 12:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:27:19 +0000   Mon, 03 Jun 2024 12:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:27:19 +0000   Mon, 03 Jun 2024 12:26:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.94.201
	  Hostname:    multinode-316400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	  System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	  Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      96s
	  kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s (x2 over 96s)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x2 over 96s)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x2 over 96s)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	  Normal  NodeReady                77s                kubelet          Node multinode-316400-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.689102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +49.321680] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.169513] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Jun 3 12:22] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.108050] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.529694] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.198733] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.261220] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +2.780963] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +0.190608] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.177678] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.294197] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[ +11.218944] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.114725] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.311872] systemd-fstab-generator[1499]: Ignoring "noauto" option for root device
	[  +5.544034] systemd-fstab-generator[1691]: Ignoring "noauto" option for root device
	[  +0.099240] kauditd_printk_skb: 73 callbacks suppressed
	[Jun 3 12:23] systemd-fstab-generator[2102]: Ignoring "noauto" option for root device
	[  +0.141626] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.266358] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +0.229302] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.545061] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.707613] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [29c39ff8468f] <==
	{"level":"warn","ts":"2024-06-03T12:23:26.275079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.412345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:23:26.275169Z","caller":"traceutil/trace.go:171","msg":"trace[1881463544] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:386; }","duration":"182.534742ms","start":"2024-06-03T12:23:26.092616Z","end":"2024-06-03T12:23:26.27515Z","steps":["trace[1881463544] 'range keys from in-memory index tree'  (duration: 182.23715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:23:26.275355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.02104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-316400\" ","response":"range_response_count:1 size:4485"}
	{"level":"info","ts":"2024-06-03T12:23:26.275378Z","caller":"traceutil/trace.go:171","msg":"trace[1463005758] range","detail":"{range_begin:/registry/minions/multinode-316400; range_end:; response_count:1; response_revision:386; }","duration":"144.06894ms","start":"2024-06-03T12:23:26.131303Z","end":"2024-06-03T12:23:26.275372Z","steps":["trace[1463005758] 'range keys from in-memory index tree'  (duration: 143.853245ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:23:44.518417Z","caller":"traceutil/trace.go:171","msg":"trace[1895398383] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"154.547367ms","start":"2024-06-03T12:23:44.363849Z","end":"2024-06-03T12:23:44.518396Z","steps":["trace[1895398383] 'process raft request'  (duration: 154.423166ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:26:11.919595Z","caller":"traceutil/trace.go:171","msg":"trace[602224079] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"121.275303ms","start":"2024-06-03T12:26:11.798301Z","end":"2024-06-03T12:26:11.919576Z","steps":["trace[602224079] 'process raft request'  (duration: 120.786203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:28.315198Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.776283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:2 size:7353"}
	{"level":"info","ts":"2024-06-03T12:26:28.315396Z","caller":"traceutil/trace.go:171","msg":"trace[1561491028] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:2; response_revision:601; }","duration":"145.957083ms","start":"2024-06-03T12:26:28.169373Z","end":"2024-06-03T12:26:28.31533Z","steps":["trace[1561491028] 'range keys from in-memory index tree'  (duration: 145.585383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:28.315893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.741268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-316400-m02\" ","response":"range_response_count:1 size:2847"}
	{"level":"info","ts":"2024-06-03T12:26:28.316221Z","caller":"traceutil/trace.go:171","msg":"trace[853361091] range","detail":"{range_begin:/registry/minions/multinode-316400-m02; range_end:; response_count:1; response_revision:601; }","duration":"265.092568ms","start":"2024-06-03T12:26:28.051114Z","end":"2024-06-03T12:26:28.316207Z","steps":["trace[853361091] 'range keys from in-memory index tree'  (duration: 263.953968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:28.316722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.004569ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-03T12:26:28.317227Z","caller":"traceutil/trace.go:171","msg":"trace[1308544811] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:601; }","duration":"252.649869ms","start":"2024-06-03T12:26:28.064563Z","end":"2024-06-03T12:26:28.317213Z","steps":["trace[1308544811] 'range keys from in-memory index tree'  (duration: 251.939369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:28.31769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.069472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:26:28.317746Z","caller":"traceutil/trace.go:171","msg":"trace[1766682021] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:601; }","duration":"230.140672ms","start":"2024-06-03T12:26:28.087597Z","end":"2024-06-03T12:26:28.317737Z","steps":["trace[1766682021] 'range keys from in-memory index tree'  (duration: 230.027972ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:26:28.318839Z","caller":"traceutil/trace.go:171","msg":"trace[293042771] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"138.900983ms","start":"2024-06-03T12:26:28.179927Z","end":"2024-06-03T12:26:28.318829Z","steps":["trace[293042771] 'process raft request'  (duration: 92.676689ms)","trace[293042771] 'compare'  (duration: 42.393195ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:26:28.579714Z","caller":"traceutil/trace.go:171","msg":"trace[1985705965] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"253.969369ms","start":"2024-06-03T12:26:28.325705Z","end":"2024-06-03T12:26:28.579674Z","steps":["trace[1985705965] 'process raft request'  (duration: 243.479571ms)","trace[1985705965] 'compare'  (duration: 10.415898ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T12:26:28.862099Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.256669ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5073463164104943402 > lease_revoke:<id:46688fde0d81268f>","response":"size:27"}
	{"level":"info","ts":"2024-06-03T12:26:28.862214Z","caller":"traceutil/trace.go:171","msg":"trace[1322707067] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:656; }","duration":"306.589963ms","start":"2024-06-03T12:26:28.55561Z","end":"2024-06-03T12:26:28.8622Z","steps":["trace[1322707067] 'read index received'  (duration: 13.499799ms)","trace[1322707067] 'applied index is now lower than readState.Index'  (duration: 293.088964ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T12:26:28.862722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.198063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-316400-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-06-03T12:26:28.862803Z","caller":"traceutil/trace.go:171","msg":"trace[1362609923] range","detail":"{range_begin:/registry/minions/multinode-316400-m02; range_end:; response_count:1; response_revision:604; }","duration":"307.310363ms","start":"2024-06-03T12:26:28.555484Z","end":"2024-06-03T12:26:28.862794Z","steps":["trace[1362609923] 'agreement among raft nodes before linearized reading'  (duration: 307.034863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:28.862834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:28.555467Z","time spent":"307.359863ms","remote":"127.0.0.1:43944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3170,"request content":"key:\"/registry/minions/multinode-316400-m02\" "}
	{"level":"warn","ts":"2024-06-03T12:26:33.760497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.562767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-316400-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-06-03T12:26:33.760957Z","caller":"traceutil/trace.go:171","msg":"trace[1852931004] range","detail":"{range_begin:/registry/minions/multinode-316400-m02; range_end:; response_count:1; response_revision:614; }","duration":"210.201467ms","start":"2024-06-03T12:26:33.55074Z","end":"2024-06-03T12:26:33.760942Z","steps":["trace[1852931004] 'range keys from in-memory index tree'  (duration: 209.385567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:33.761571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.009659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:26:33.761816Z","caller":"traceutil/trace.go:171","msg":"trace[1220862400] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:614; }","duration":"261.247159ms","start":"2024-06-03T12:26:33.500505Z","end":"2024-06-03T12:26:33.761752Z","steps":["trace[1220862400] 'count revisions from in-memory index tree'  (duration: 260.586359ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:27:53 up 6 min,  0 users,  load average: 0.55, 0.37, 0.17
	Linux multinode-316400 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a00a9dc2a937] <==
	I0603 12:26:48.362565       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:26:58.371289       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:26:58.371335       1 main.go:227] handling current node
	I0603 12:26:58.371348       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:26:58.371354       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:27:08.386201       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:27:08.386306       1 main.go:227] handling current node
	I0603 12:27:08.386322       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:27:08.386329       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:27:18.399357       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:27:18.399486       1 main.go:227] handling current node
	I0603 12:27:18.399501       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:27:18.399508       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:27:28.413982       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:27:28.414057       1 main.go:227] handling current node
	I0603 12:27:28.414070       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:27:28.414077       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:27:38.421918       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:27:38.421959       1 main.go:227] handling current node
	I0603 12:27:38.421972       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:27:38.421977       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:27:48.435633       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:27:48.435735       1 main.go:227] handling current node
	I0603 12:27:48.435750       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:27:48.435795       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8c884e5bfb96] <==
	I0603 12:23:02.958506       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 12:23:03.034341       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 12:23:03.159077       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0603 12:23:03.173627       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47]
	I0603 12:23:03.176182       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 12:23:03.186039       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 12:23:03.891321       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0603 12:23:04.093108       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"client disconnected"}: client disconnected
	E0603 12:23:04.093282       1 wrap.go:54] timeout or abort while handling: method=POST URI="/api/v1/namespaces/default/events" audit-ID="b8559dde-63c6-4ade-b287-cea9092806dd"
	E0603 12:23:04.093329       1 timeout.go:142] post-timeout activity - time-elapsed: 9.299µs, POST "/api/v1/namespaces/default/events" result: <nil>
	I0603 12:23:04.250643       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 12:23:04.290192       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 12:23:04.306967       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 12:23:18.899651       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0603 12:23:19.056267       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0603 12:27:08.801269       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58325: use of closed network connection
	E0603 12:27:09.230080       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58327: use of closed network connection
	E0603 12:27:09.688692       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58329: use of closed network connection
	E0603 12:27:10.131482       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58331: use of closed network connection
	E0603 12:27:10.548314       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58333: use of closed network connection
	E0603 12:27:11.027901       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58335: use of closed network connection
	E0603 12:27:11.792816       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58338: use of closed network connection
	E0603 12:27:22.211657       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58340: use of closed network connection
	E0603 12:27:22.617031       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58343: use of closed network connection
	E0603 12:27:33.042743       1 conn.go:339] Error on socket receive: read tcp 172.17.87.47:8443->172.17.80.1:58345: use of closed network connection
	
	
	==> kube-controller-manager [3d7dc29a5791] <==
	I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	
	
	==> kube-proxy [ad08c7b8f3af] <==
	I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f39be6db7a1f] <==
	W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:23:32 multinode-316400 kubelet[2109]: I0603 12:23:32.821337    2109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.821322422 podStartE2EDuration="5.821322422s" podCreationTimestamp="2024-06-03 12:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:23:32.802816395 +0000 UTC m=+28.682572575" watchObservedRunningTime="2024-06-03 12:23:32.821322422 +0000 UTC m=+28.701078602"
	Jun 03 12:24:04 multinode-316400 kubelet[2109]: E0603 12:24:04.352743    2109 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:24:04 multinode-316400 kubelet[2109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:24:04 multinode-316400 kubelet[2109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:24:04 multinode-316400 kubelet[2109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:24:04 multinode-316400 kubelet[2109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:25:04 multinode-316400 kubelet[2109]: E0603 12:25:04.351113    2109 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:25:04 multinode-316400 kubelet[2109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:25:04 multinode-316400 kubelet[2109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:25:04 multinode-316400 kubelet[2109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:25:04 multinode-316400 kubelet[2109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:26:04 multinode-316400 kubelet[2109]: E0603 12:26:04.354979    2109 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:26:04 multinode-316400 kubelet[2109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:26:04 multinode-316400 kubelet[2109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:26:04 multinode-316400 kubelet[2109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:26:04 multinode-316400 kubelet[2109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:27:03 multinode-316400 kubelet[2109]: I0603 12:27:03.153398    2109 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	Jun 03 12:27:03 multinode-316400 kubelet[2109]: I0603 12:27:03.168501    2109 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2hdj\" (UniqueName: \"kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj\") pod \"busybox-fc5497c4f-pm79t\" (UID: \"5a541beb-e22e-41aa-bb76-5e6e82ac0d39\") " pod="default/busybox-fc5497c4f-pm79t"
	Jun 03 12:27:03 multinode-316400 kubelet[2109]: I0603 12:27:03.945968    2109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	Jun 03 12:27:04 multinode-316400 kubelet[2109]: E0603 12:27:04.357529    2109 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:27:04 multinode-316400 kubelet[2109]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:27:04 multinode-316400 kubelet[2109]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:27:04 multinode-316400 kubelet[2109]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:27:04 multinode-316400 kubelet[2109]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:27:06 multinode-316400 kubelet[2109]: I0603 12:27:06.002684    2109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-pm79t" podStartSLOduration=1.8610195090000001 podStartE2EDuration="3.002665598s" podCreationTimestamp="2024-06-03 12:27:03 +0000 UTC" firstStartedPulling="2024-06-03 12:27:04.030267383 +0000 UTC m=+239.910023563" lastFinishedPulling="2024-06-03 12:27:05.171913472 +0000 UTC m=+241.051669652" observedRunningTime="2024-06-03 12:27:06.0025189 +0000 UTC m=+241.882275180" watchObservedRunningTime="2024-06-03 12:27:06.002665598 +0000 UTC m=+241.882421778"
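	The repeated iptables-canary failures above line up with kube-proxy's earlier "No iptables support for family" message: this guest kernel has no ip6tables nat table, which is benign in a single-stack IPv4 cluster. A sketch of how the condition can be confirmed from inside the VM (profile name taken from this run):
	
	  out/minikube-windows-amd64.exe -p multinode-316400 ssh -- sudo ip6tables -t nat -L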
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 05:27:45.394234   10580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
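The W0603 "Unable to resolve the current Docker CLI context" warning recurs in every stderr capture in this report; it stems from a missing context metadata file under .docker\contexts\meta on the Jenkins host and is independent of the hyperv driver. A sketch of how the context state can be inspected from that shell (assumes the docker CLI is on PATH):

	docker context ls
	docker context inspect default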
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-316400 -n multinode-316400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-316400 -n multinode-316400: (12.0365632s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-316400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.27s)
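For reference, the failing check resolves host.minikube.internal and pings the resulting host IP from each of the two busybox pods; an approximate by-hand rerun (pod names and the 172.17.80.1 host address are taken from the logs above; the exact flags multinode_test.go uses may differ):

	kubectl --context multinode-316400 exec busybox-fc5497c4f-pm79t -- nslookup host.minikube.internal
	kubectl --context multinode-316400 exec busybox-fc5497c4f-pm79t -- ping -c 1 172.17.80.1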

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (520.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-316400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-316400
E0603 05:43:39.510766    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-316400: (1m35.3651973s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-316400 --wait=true -v=8 --alsologtostderr
E0603 05:47:10.856322    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:48:34.095608    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:48:39.513317    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-316400 --wait=true -v=8 --alsologtostderr: exit status 1 (6m12.4454289s)

                                                
                                                
-- stdout --
	* [multinode-316400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-316400" primary control-plane node in "multinode-316400" cluster
	* Restarting existing hyperv VM for "multinode-316400" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-316400-m02" worker node in "multinode-316400" cluster
	* Restarting existing hyperv VM for "multinode-316400-m02" ...
	* Found network options:
	  - NO_PROXY=172.17.95.88
	  - NO_PROXY=172.17.95.88
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	  - env NO_PROXY=172.17.95.88
	* Verifying Kubernetes components...
	
	* Starting "multinode-316400-m03" worker node in "multinode-316400" cluster
	* Restarting existing hyperv VM for "multinode-316400-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 05:43:48.809751   10844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 05:43:48.816063   10844 out.go:291] Setting OutFile to fd 1460 ...
	I0603 05:43:48.816923   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:43:48.816923   10844 out.go:304] Setting ErrFile to fd 1472...
	I0603 05:43:48.816923   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:43:48.835388   10844 out.go:298] Setting JSON to false
	I0603 05:43:48.840840   10844 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7856,"bootTime":1717410772,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 05:43:48.840840   10844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 05:43:48.910410   10844 out.go:177] * [multinode-316400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 05:43:48.973379   10844 notify.go:220] Checking for updates...
	I0603 05:43:49.007199   10844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:43:49.067130   10844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 05:43:49.115725   10844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 05:43:49.176193   10844 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 05:43:49.191212   10844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 05:43:49.222521   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:43:49.222521   10844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 05:43:54.451501   10844 out.go:177] * Using the hyperv driver based on existing profile
	I0603 05:43:54.523855   10844 start.go:297] selected driver: hyperv
	I0603 05:43:54.523966   10844 start.go:901] validating driver "hyperv" against &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:43:54.524466   10844 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 05:43:54.574263   10844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:43:54.574498   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:43:54.574579   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 05:43:54.574579   10844 start.go:340] cluster config:
	{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:43:54.575113   10844 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 05:43:54.660619   10844 out.go:177] * Starting "multinode-316400" primary control-plane node in "multinode-316400" cluster
	I0603 05:43:54.697784   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:43:54.703284   10844 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 05:43:54.703284   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:43:54.703826   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:43:54.704126   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:43:54.704585   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:43:54.707531   10844 start.go:360] acquireMachinesLock for multinode-316400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:43:54.707763   10844 start.go:364] duration metric: took 115.4µs to acquireMachinesLock for "multinode-316400"
	I0603 05:43:54.707996   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:43:54.708102   10844 fix.go:54] fixHost starting: 
	I0603 05:43:54.708760   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:43:57.433627   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:43:57.433627   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:43:57.433764   10844 fix.go:112] recreateIfNeeded on multinode-316400: state=Stopped err=<nil>
	W0603 05:43:57.433764   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 05:43:57.447672   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400" ...
	I0603 05:43:57.458029   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:00.557726   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:05.277634   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:05.277634   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:06.282004   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:08.551271   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:08.551598   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:08.551598   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:11.140571   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:11.140571   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:12.156391   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:14.421680   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:14.421680   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:14.421955   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:16.986756   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:16.986803   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:17.996578   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:20.254690   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:20.254799   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:20.254880   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:22.853590   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:22.853590   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:23.860871   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:26.126650   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:26.127700   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:26.127836   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:28.765100   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:28.765310   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:28.768270   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:30.983922   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:30.984596   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:30.984873   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:33.636435   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:33.637359   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:33.637602   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:44:33.640287   10844 machine.go:94] provisionDockerMachine start ...
	I0603 05:44:33.640381   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:35.824890   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:35.825056   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:35.825133   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:38.433997   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:38.433997   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:38.440668   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:38.441193   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:38.441424   10844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:44:38.572796   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:44:38.572796   10844 buildroot.go:166] provisioning hostname "multinode-316400"
	I0603 05:44:38.573096   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:40.687886   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:40.688360   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:40.688360   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:43.250914   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:43.251028   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:43.256529   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:43.257052   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:43.257183   10844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400 && echo "multinode-316400" | sudo tee /etc/hostname
	I0603 05:44:43.409594   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400
	
	I0603 05:44:43.409594   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:45.585770   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:45.586666   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:45.586740   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:48.117050   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:48.117251   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:48.122636   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:48.123313   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:48.123313   10844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:44:48.267373   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 05:44:48.267373   10844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:44:48.267373   10844 buildroot.go:174] setting up certificates
	I0603 05:44:48.267373   10844 provision.go:84] configureAuth start
	I0603 05:44:48.267373   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:50.397193   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:50.398194   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:50.398194   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:52.922079   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:52.922828   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:52.922899   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:55.041046   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:55.041046   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:55.041850   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:57.607314   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:57.607314   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:57.607314   10844 provision.go:143] copyHostCerts
	I0603 05:44:57.607556   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:44:57.607628   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:44:57.607628   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:44:57.608183   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:44:57.609499   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:44:57.609839   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:44:57.609839   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:44:57.610232   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:44:57.611238   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:44:57.611504   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:44:57.611504   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:44:57.611655   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:44:57.612658   10844 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400 san=[127.0.0.1 172.17.95.88 localhost minikube multinode-316400]
	I0603 05:44:57.694551   10844 provision.go:177] copyRemoteCerts
	I0603 05:44:57.706699   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:44:57.707300   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:59.825776   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:59.826249   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:59.826249   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:02.399629   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:02.399629   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:02.399629   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:02.502175   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7954582s)
	I0603 05:45:02.502175   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:45:02.503291   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:45:02.548818   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:45:02.548910   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 05:45:02.597883   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:45:02.598449   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 05:45:02.642864   10844 provision.go:87] duration metric: took 14.3754372s to configureAuth
	I0603 05:45:02.642864   10844 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:45:02.643867   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:45:02.643958   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:04.742801   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:04.742801   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:04.742880   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:07.428026   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:07.428026   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:07.434100   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:07.434348   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:07.434348   10844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:45:07.563888   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:45:07.563888   10844 buildroot.go:70] root file system type: tmpfs
	I0603 05:45:07.563888   10844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:45:07.563888   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:09.755582   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:09.756487   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:09.756487   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:12.303886   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:12.304516   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:12.309939   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:12.310597   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:12.310597   10844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:45:12.472332   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:45:12.472452   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:14.613050   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:14.613050   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:14.613410   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:17.170955   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:17.171094   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:17.176550   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:17.177233   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:17.177233   10844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:45:19.620742   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:45:19.620742   10844 machine.go:97] duration metric: took 45.9802558s to provisionDockerMachine
	I0603 05:45:19.620742   10844 start.go:293] postStartSetup for "multinode-316400" (driver="hyperv")
	I0603 05:45:19.620742   10844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:45:19.631739   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:45:19.632742   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:21.800577   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:21.800717   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:21.800830   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:24.312032   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:24.313038   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:24.313294   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:24.432701   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8009445s)
	I0603 05:45:24.445165   10844 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:45:24.454443   10844 command_runner.go:130] > NAME=Buildroot
	I0603 05:45:24.454539   10844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:45:24.454539   10844 command_runner.go:130] > ID=buildroot
	I0603 05:45:24.454539   10844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:45:24.454539   10844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:45:24.454596   10844 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:45:24.454596   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:45:24.455134   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:45:24.456082   10844 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:45:24.456143   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:45:24.470725   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:45:24.490808   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:45:24.535234   10844 start.go:296] duration metric: took 4.9144739s for postStartSetup
	I0603 05:45:24.535234   10844 fix.go:56] duration metric: took 1m29.8267995s for fixHost
	I0603 05:45:24.535234   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:26.738491   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:26.738537   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:26.738537   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:29.303844   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:29.304102   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:29.312620   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:29.312838   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:29.312838   10844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 05:45:29.445596   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418729.453516780
	
	I0603 05:45:29.445596   10844 fix.go:216] guest clock: 1717418729.453516780
	I0603 05:45:29.445596   10844 fix.go:229] Guest: 2024-06-03 05:45:29.45351678 -0700 PDT Remote: 2024-06-03 05:45:24.5352342 -0700 PDT m=+95.805785701 (delta=4.91828258s)
	I0603 05:45:29.445596   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:31.631915   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:31.631915   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:31.632511   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:34.166993   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:34.166993   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:34.171595   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:34.172185   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:34.172185   10844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717418729
	I0603 05:45:34.309869   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:45:29 UTC 2024
	
	I0603 05:45:34.309934   10844 fix.go:236] clock set: Mon Jun  3 12:45:29 UTC 2024
	 (err=<nil>)
	I0603 05:45:34.310002   10844 start.go:83] releasing machines lock for "multinode-316400", held for 1m39.6017028s
	I0603 05:45:34.310154   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:36.417421   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:36.417421   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:36.418195   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:38.986858   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:38.986858   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:38.991392   10844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:45:38.991526   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:39.000728   10844 ssh_runner.go:195] Run: cat /version.json
	I0603 05:45:39.001715   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:43.850751   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:43.850751   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:43.850751   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:43.872394   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:43.873137   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:43.873261   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:43.943745   10844 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 05:45:43.943972   10844 ssh_runner.go:235] Completed: cat /version.json: (4.9420123s)
	I0603 05:45:43.959558   10844 ssh_runner.go:195] Run: systemctl --version
	I0603 05:45:44.015709   10844 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:45:44.015709   10844 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0242986s)
	I0603 05:45:44.015830   10844 command_runner.go:130] > systemd 252 (252)
	I0603 05:45:44.015830   10844 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 05:45:44.027814   10844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:45:44.036653   10844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 05:45:44.036653   10844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:45:44.048619   10844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:45:44.078579   10844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:45:44.078579   10844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 05:45:44.078746   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:45:44.079007   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:45:44.112111   10844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:45:44.124848   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:45:44.157147   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:45:44.177408   10844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:45:44.190131   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:45:44.224380   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:45:44.262949   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:45:44.295838   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:45:44.332622   10844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:45:44.364631   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:45:44.395593   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:45:44.425337   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:45:44.455321   10844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:45:44.476664   10844 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:45:44.489107   10844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:45:44.518337   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:44.712162   10844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 05:45:44.744396   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:45:44.756988   10844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:45:44.781124   10844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:45:44.781124   10844 command_runner.go:130] > [Unit]
	I0603 05:45:44.781202   10844 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:45:44.781202   10844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:45:44.781202   10844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:45:44.781202   10844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:45:44.781202   10844 command_runner.go:130] > StartLimitBurst=3
	I0603 05:45:44.781202   10844 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:45:44.781258   10844 command_runner.go:130] > [Service]
	I0603 05:45:44.781258   10844 command_runner.go:130] > Type=notify
	I0603 05:45:44.781258   10844 command_runner.go:130] > Restart=on-failure
	I0603 05:45:44.781258   10844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:45:44.781258   10844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:45:44.781308   10844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:45:44.781308   10844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:45:44.781308   10844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:45:44.781308   10844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:45:44.781384   10844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:45:44.781384   10844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:45:44.781384   10844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:45:44.781442   10844 command_runner.go:130] > ExecStart=
	I0603 05:45:44.781482   10844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:45:44.781534   10844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:45:44.781556   10844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:45:44.781556   10844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:45:44.781584   10844 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > LimitCORE=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:45:44.781622   10844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:45:44.781622   10844 command_runner.go:130] > TasksMax=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:45:44.781622   10844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:45:44.781695   10844 command_runner.go:130] > Delegate=yes
	I0603 05:45:44.781695   10844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:45:44.781719   10844 command_runner.go:130] > KillMode=process
	I0603 05:45:44.781748   10844 command_runner.go:130] > [Install]
	I0603 05:45:44.781748   10844 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:45:44.795062   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:45:44.825265   10844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:45:44.860097   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:45:44.892930   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:45:44.929529   10844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:45:44.999676   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:45:45.022637   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:45:45.057391   10844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:45:45.068376   10844 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:45:45.074412   10844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:45:45.085379   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:45:45.103812   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:45:45.145743   10844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:45:45.367351   10844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:45:45.559233   10844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:45:45.559541   10844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 05:45:45.603824   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:45.797277   10844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:45:48.437479   10844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6401915s)
	I0603 05:45:48.451204   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:45:48.483204   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:45:48.517357   10844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:45:48.733337   10844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:45:48.937108   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:49.146158   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:45:49.188509   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:45:49.224547   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:49.417865   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:45:49.526417   10844 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:45:49.537714   10844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:45:49.547080   10844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:45:49.547214   10844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:45:49.547214   10844 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0603 05:45:49.547214   10844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:45:49.547214   10844 command_runner.go:130] > Access: 2024-06-03 12:45:49.452283219 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] > Modify: 2024-06-03 12:45:49.452283219 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] > Change: 2024-06-03 12:45:49.457283264 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] >  Birth: -
	I0603 05:45:49.547403   10844 start.go:562] Will wait 60s for crictl version
	I0603 05:45:49.560071   10844 ssh_runner.go:195] Run: which crictl
	I0603 05:45:49.565489   10844 command_runner.go:130] > /usr/bin/crictl
	I0603 05:45:49.576897   10844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:45:49.629626   10844 command_runner.go:130] > Version:  0.1.0
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeName:  docker
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:45:49.630513   10844 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:45:49.639893   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:45:49.670938   10844 command_runner.go:130] > 26.0.2
	I0603 05:45:49.682613   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:45:49.711808   10844 command_runner.go:130] > 26.0.2
	I0603 05:45:49.717677   10844 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:45:49.717865   10844 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:45:49.724868   10844 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:45:49.724868   10844 ip.go:210] interface addr: 172.17.80.1/20
	I0603 05:45:49.740250   10844 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:45:49.747348   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:45:49.774754   10844 kubeadm.go:877] updating cluster {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 05:45:49.775093   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:45:49.784947   10844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:45:49.814591   10844 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 05:45:49.815570   10844 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:45:49.815570   10844 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 05:45:49.815771   10844 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 05:45:49.815771   10844 docker.go:615] Images already preloaded, skipping extraction
	I0603 05:45:49.825761   10844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:45:49.848282   10844 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 05:45:49.848481   10844 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:45:49.848481   10844 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 05:45:49.848589   10844 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 05:45:49.848644   10844 cache_images.go:84] Images are preloaded, skipping loading
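
The preload check above simply compares the output of docker images --format {{.Repository}}:{{.Tag}} against the image list baked into the preload tarball; if every expected tag is already present, extraction is skipped. A hedged sketch of that comparison (the expected list is a subset copied from the log; the docker CLI is assumed to be on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		// Index every repo:tag the daemon already has.
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		missing := 0
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing:", img)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("images already preloaded, skipping extraction")
		}
	}
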
	I0603 05:45:49.848644   10844 kubeadm.go:928] updating node { 172.17.95.88 8443 v1.30.1 docker true true} ...
	I0603 05:45:49.848948   10844 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.95.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 05:45:49.858246   10844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 05:45:49.892506   10844 command_runner.go:130] > cgroupfs
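
docker info --format {{.CgroupDriver}} is the probe used to decide which cgroupDriver to write into the kubelet configuration; here it returns cgroupfs, and the generated KubeletConfiguration below matches. A minimal sketch of the probe (assumes a reachable docker CLI):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
		// The same value must land in KubeletConfiguration.cgroupDriver;
		// a mismatch between kubelet and runtime prevents the kubelet from starting.
		fmt.Println("cgroup driver:", driver)
	}
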
	I0603 05:45:49.893814   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:45:49.893814   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 05:45:49.893814   10844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 05:45:49.893905   10844 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.95.88 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-316400 NodeName:multinode-316400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.95.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.95.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 05:45:49.894199   10844 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.95.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-316400"
	  kubeletExtraArgs:
	    node-ip: 172.17.95.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
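
The dump above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. To inspect such a stream programmatically, a hedged sketch walking the documents with gopkg.in/yaml.v3 (that module is an assumption for illustration; kubeadm itself uses the k8s.io apimachinery decoders):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Each document declares its own kind, e.g. InitConfiguration.
			fmt.Println("kind:", doc["kind"])
		}
	}
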
	
	I0603 05:45:49.906839   10844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubeadm
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubectl
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubelet
	I0603 05:45:49.926616   10844 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 05:45:49.938257   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 05:45:49.958114   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0603 05:45:49.992902   10844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:45:50.023256   10844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0603 05:45:50.067301   10844 ssh_runner.go:195] Run: grep 172.17.95.88	control-plane.minikube.internal$ /etc/hosts
	I0603 05:45:50.073480   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:45:50.111809   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:50.312147   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:45:50.346041   10844 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.95.88
	I0603 05:45:50.346041   10844 certs.go:194] generating shared ca certs ...
	I0603 05:45:50.346160   10844 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.346878   10844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:45:50.347284   10844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:45:50.347496   10844 certs.go:256] generating profile certs ...
	I0603 05:45:50.348108   10844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.key
	I0603 05:45:50.348222   10844 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17
	I0603 05:45:50.348417   10844 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.88]
	I0603 05:45:50.539063   10844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 ...
	I0603 05:45:50.539063   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17: {Name:mk5be6417b01220b39e4973282b711a048fd41b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.540501   10844 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17 ...
	I0603 05:45:50.540501   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17: {Name:mkc2845c79a22602a493821a7a6efafb1bd00853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.541382   10844 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt
	I0603 05:45:50.557330   10844 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key
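
The apiserver serving certificate is regenerated here because the VM came back with a new IP; note the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.88], where the first entry is the in-cluster kubernetes service IP. A minimal crypto/x509 sketch of issuing a cert with IP SANs from a CA (a throwaway CA is generated in place of minikube's on-disk one; errors elided for brevity; illustrative only):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs seen in the log.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("172.17.95.88"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
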
	I0603 05:45:50.558417   10844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key
	I0603 05:45:50.558417   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:45:50.559495   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:45:50.559682   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:45:50.559738   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:45:50.560058   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 05:45:50.560354   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 05:45:50.561050   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 05:45:50.561050   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 05:45:50.562144   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:45:50.562670   10844 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:45:50.562916   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:45:50.563324   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:45:50.563684   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:45:50.564158   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:45:50.564899   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:45:50.565227   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:50.565496   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:45:50.565756   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:45:50.567324   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:45:50.616973   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:45:50.668387   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:45:50.714540   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:45:50.758039   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 05:45:50.806066   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 05:45:50.853517   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 05:45:50.901582   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 05:45:50.947781   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:45:50.992386   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:45:51.037838   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:45:51.080332   10844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 05:45:51.123114   10844 ssh_runner.go:195] Run: openssl version
	I0603 05:45:51.132669   10844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:45:51.144276   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:45:51.176695   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.183161   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.183684   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.199231   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.206773   10844 command_runner.go:130] > b5213941
	I0603 05:45:51.217222   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 05:45:51.248588   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:45:51.280201   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.288208   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.288208   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.299210   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.310099   10844 command_runner.go:130] > 51391683
	I0603 05:45:51.322261   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 05:45:51.352751   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:45:51.385525   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.394018   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.394018   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.406322   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.415944   10844 command_runner.go:130] > 3ec20f2e
	I0603 05:45:51.427945   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
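
The openssl x509 -hash / ln -fs pairs above reproduce what c_rehash does: OpenSSL locates trusted certificates in /etc/ssl/certs by <subject-hash>.0 symlinks, so each installed PEM gets a link named after its subject hash (b5213941, 51391683, 3ec20f2e in this run). A sketch that shells out to openssl for the hash and creates the link (shelling out is an assumption; computing the hash natively would require reimplementing OpenSSL's name canonicalization):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace a stale link, mirroring ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
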
	I0603 05:45:51.461359   10844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:45:51.469161   10844 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:45:51.469161   10844 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 05:45:51.469161   10844 command_runner.go:130] > Device: 8,1	Inode: 4196168     Links: 1
	I0603 05:45:51.469161   10844 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 05:45:51.469161   10844 command_runner.go:130] > Access: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] > Modify: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] > Change: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] >  Birth: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.480677   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 05:45:51.490675   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.501675   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 05:45:51.510483   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.521820   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 05:45:51.530900   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.542424   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 05:45:51.553093   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.563438   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 05:45:51.571964   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.583661   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 05:45:51.593031   10844 command_runner.go:130] > Certificate will not expire
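
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in pure Go with crypto/x509 (the path is one of the files probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of: openssl x509 -checkend 86400
		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}
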
	I0603 05:45:51.593417   10844 kubeadm.go:391] StartCluster: {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:45:51.603534   10844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 05:45:51.637170   10844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 05:45:51.658628   10844 command_runner.go:130] > member
	W0603 05:45:51.658734   10844 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 05:45:51.658734   10844 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 05:45:51.658734   10844 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 05:45:51.670760   10844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 05:45:51.688309   10844 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 05:45:51.689593   10844 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-316400" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:45:51.690116   10844 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-316400" cluster setting kubeconfig missing "multinode-316400" context setting]
	I0603 05:45:51.691193   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:51.705622   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:45:51.707044   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:45:51.707880   10844 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 05:45:51.720613   10844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 05:45:51.740283   10844 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0603 05:45:51.740328   10844 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0603 05:45:51.740328   10844 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 05:45:51.740328   10844 command_runner.go:130] >  kind: InitConfiguration
	I0603 05:45:51.740328   10844 command_runner.go:130] >  localAPIEndpoint:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -  advertiseAddress: 172.17.87.47
	I0603 05:45:51.740328   10844 command_runner.go:130] > +  advertiseAddress: 172.17.95.88
	I0603 05:45:51.740328   10844 command_runner.go:130] >    bindPort: 8443
	I0603 05:45:51.740328   10844 command_runner.go:130] >  bootstrapTokens:
	I0603 05:45:51.740328   10844 command_runner.go:130] >    - groups:
	I0603 05:45:51.740328   10844 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0603 05:45:51.740328   10844 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0603 05:45:51.740328   10844 command_runner.go:130] >    name: "multinode-316400"
	I0603 05:45:51.740328   10844 command_runner.go:130] >    kubeletExtraArgs:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -    node-ip: 172.17.87.47
	I0603 05:45:51.740328   10844 command_runner.go:130] > +    node-ip: 172.17.95.88
	I0603 05:45:51.740328   10844 command_runner.go:130] >    taints: []
	I0603 05:45:51.740328   10844 command_runner.go:130] >  ---
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 05:45:51.740328   10844 command_runner.go:130] >  kind: ClusterConfiguration
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiServer:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.87.47"]
	I0603 05:45:51.740328   10844 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	I0603 05:45:51.740328   10844 command_runner.go:130] >    extraArgs:
	I0603 05:45:51.740328   10844 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0603 05:45:51.740328   10844 command_runner.go:130] >  controllerManager:
	I0603 05:45:51.740328   10844 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.87.47
	+  advertiseAddress: 172.17.95.88
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-316400"
	   kubeletExtraArgs:
	-    node-ip: 172.17.87.47
	+    node-ip: 172.17.95.88
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.87.47"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
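
Config drift is detected with nothing fancier than diff -u between the kubeadm.yaml already on disk and the freshly rendered kubeadm.yaml.new; exit status 1 (differences found) flags the reconfigure, and here the diff shows only the node IP moving from 172.17.87.47 to 172.17.95.88. A sketch of interpreting diff's exit codes correctly (0 = identical, 1 = drift, >1 = error):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.Output() // stdout is captured even on exit status 1
		switch e := err.(type) {
		case nil:
			fmt.Println("no drift, keeping existing config")
		case *exec.ExitError:
			if e.ExitCode() == 1 {
				fmt.Printf("drift detected, will reconfigure:\n%s", out)
			} else {
				panic(err) // exit code > 1 means diff itself failed
			}
		default:
			panic(err)
		}
	}
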
	I0603 05:45:51.740328   10844 kubeadm.go:1154] stopping kube-system containers ...
	I0603 05:45:51.749053   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 05:45:51.785016   10844 command_runner.go:130] > 8280b3904678
	I0603 05:45:51.785016   10844 command_runner.go:130] > f3d3a474bbe6
	I0603 05:45:51.785016   10844 command_runner.go:130] > 4956a24c17e7
	I0603 05:45:51.785016   10844 command_runner.go:130] > d4b4a69fc5b7
	I0603 05:45:51.785016   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:45:51.785016   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:45:51.785016   10844 command_runner.go:130] > 53f366fa802e
	I0603 05:45:51.785016   10844 command_runner.go:130] > 0ab8fbb688df
	I0603 05:45:51.785016   10844 command_runner.go:130] > 29c39ff8468f
	I0603 05:45:51.785016   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:45:51.785016   10844 command_runner.go:130] > 8c884e5bfb96
	I0603 05:45:51.785016   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:45:51.785016   10844 command_runner.go:130] > a24225992b63
	I0603 05:45:51.785016   10844 command_runner.go:130] > bf22fe666154
	I0603 05:45:51.785016   10844 command_runner.go:130] > 77f0d5d979f8
	I0603 05:45:51.785016   10844 command_runner.go:130] > 10b8b906c7ec
	I0603 05:45:51.785016   10844 docker.go:483] Stopping containers: [8280b3904678 f3d3a474bbe6 4956a24c17e7 d4b4a69fc5b7 a00a9dc2a937 ad08c7b8f3af 53f366fa802e 0ab8fbb688df 29c39ff8468f f39be6db7a1f 8c884e5bfb96 3d7dc29a5791 a24225992b63 bf22fe666154 77f0d5d979f8 10b8b906c7ec]
	I0603 05:45:51.794381   10844 ssh_runner.go:195] Run: docker stop 8280b3904678 f3d3a474bbe6 4956a24c17e7 d4b4a69fc5b7 a00a9dc2a937 ad08c7b8f3af 53f366fa802e 0ab8fbb688df 29c39ff8468f f39be6db7a1f 8c884e5bfb96 3d7dc29a5791 a24225992b63 bf22fe666154 77f0d5d979f8 10b8b906c7ec
	I0603 05:45:51.827531   10844 command_runner.go:130] > 8280b3904678
	I0603 05:45:51.828280   10844 command_runner.go:130] > f3d3a474bbe6
	I0603 05:45:51.828280   10844 command_runner.go:130] > 4956a24c17e7
	I0603 05:45:51.828280   10844 command_runner.go:130] > d4b4a69fc5b7
	I0603 05:45:51.828280   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:45:51.828280   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:45:51.828280   10844 command_runner.go:130] > 53f366fa802e
	I0603 05:45:51.828280   10844 command_runner.go:130] > 0ab8fbb688df
	I0603 05:45:51.828280   10844 command_runner.go:130] > 29c39ff8468f
	I0603 05:45:51.828280   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:45:51.828280   10844 command_runner.go:130] > 8c884e5bfb96
	I0603 05:45:51.828280   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:45:51.828280   10844 command_runner.go:130] > a24225992b63
	I0603 05:45:51.828418   10844 command_runner.go:130] > bf22fe666154
	I0603 05:45:51.828418   10844 command_runner.go:130] > 77f0d5d979f8
	I0603 05:45:51.828418   10844 command_runner.go:130] > 10b8b906c7ec
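
Stopping the control plane before the restart is a two-step flow: list container IDs with a name filter (k8s_.*_(kube-system)_ matches the naming convention cri-dockerd applies to pod containers) and pass the whole batch to docker stop, as the two commands above show. A sketch of the same list-then-stop flow:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// IDs of all (including exited) kube-system pod containers.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_.*_(kube-system)_",
			"--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return
		}
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			panic(err)
		}
		fmt.Printf("stopped %d containers\n", len(ids))
	}
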
	I0603 05:45:51.840536   10844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 05:45:51.880369   10844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:45:51.899992   10844 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:45:51.899992   10844 kubeadm.go:156] found existing configuration files:
	
	I0603 05:45:51.912770   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 05:45:51.929630   10844 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:45:51.930696   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:45:51.943454   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 05:45:51.974548   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 05:45:51.992275   10844 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:45:51.992973   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:45:52.007941   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 05:45:52.037288   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 05:45:52.055441   10844 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:45:52.056030   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:45:52.069087   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 05:45:52.102530   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 05:45:52.122535   10844 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:45:52.122535   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:45:52.133517   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 05:45:52.162517   10844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 05:45:52.181471   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 05:45:52.462567   10844 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 05:45:52.462567   10844 command_runner.go:130] > [certs] Using the existing "sa" key
	I0603 05:45:52.462567   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 05:45:54.286423   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.8237674s)
	I0603 05:45:54.286423   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.598569   10844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:45:54.598909   10844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:45:54.598909   10844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:45:54.599126   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.706106   10844 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 05:45:54.706179   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 05:45:54.706218   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 05:45:54.706218   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 05:45:54.706218   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.810667   10844 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 05:45:54.810977   10844 api_server.go:52] waiting for apiserver process to appear ...
	I0603 05:45:54.823668   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:55.325904   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:55.836774   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.332919   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.837488   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.862164   10844 command_runner.go:130] > 1862
	I0603 05:45:56.862164   10844 api_server.go:72] duration metric: took 2.0512122s to wait for apiserver process to appear ...
	I0603 05:45:56.862164   10844 api_server.go:88] waiting for apiserver healthz status ...
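
The wait below polls https://<node>:8443/healthz until it answers 200. The sequence of responses is expected: first a 403 (anonymous users may not GET /healthz until RBAC bootstrap completes), then 500s while the poststarthooks listed in the body finish, then OK. A hedged polling sketch (InsecureSkipVerify stands in for the real client, which authenticates with the cluster CA and client certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative only: do not skip verification in real code.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://172.17.95.88:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz status:", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
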
	I0603 05:45:56.862164   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.344153   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 05:46:00.344153   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 05:46:00.344153   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.501412   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.501412   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:00.501412   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.513517   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.513517   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:00.868650   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.876085   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.876085   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:01.373507   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:01.384528   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:01.384528   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:01.862832   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:01.870640   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
	I0603 05:46:01.871403   10844 round_trippers.go:463] GET https://172.17.95.88:8443/version
	I0603 05:46:01.871403   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:01.871403   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:01.871403   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:01.881771   10844 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 05:46:01.881771   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:01.881771   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Content-Length: 263
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:01 GMT
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Audit-Id: a5bab7d6-bece-41de-960c-f7ef97b8b6e4
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:01.881771   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:01.881771   10844 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 05:46:01.881771   10844 api_server.go:141] control plane version: v1.30.1
	I0603 05:46:01.881771   10844 api_server.go:131] duration metric: took 5.0195889s to wait for apiserver health ...
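
Once /healthz returns 200, the client reads /version and reports gitVersion as the control plane version. A short sketch of decoding that exact payload; the struct below is a hand-rolled stand-in for the canonical version.Info type in k8s.io/apimachinery/pkg/version:

    // Sketch: decode the /version payload shown above and pull gitVersion.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo lists only the fields used here; the full payload also
    // carries gitCommit, buildDate, goVersion, and compiler.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(payload, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.1, as logged above
    }
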
	I0603 05:46:01.881771   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:46:01.881771   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 05:46:01.891146   10844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 05:46:01.910851   10844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 05:46:01.918344   10844 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 05:46:01.918344   10844 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 05:46:01.918415   10844 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 05:46:01.918415   10844 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 05:46:01.918415   10844 command_runner.go:130] > Access: 2024-06-03 12:44:25.864397100 +0000
	I0603 05:46:01.918415   10844 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 05:46:01.918415   10844 command_runner.go:130] > Change: 2024-06-03 12:44:13.868000000 +0000
	I0603 05:46:01.918497   10844 command_runner.go:130] >  Birth: -
	I0603 05:46:01.918497   10844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 05:46:01.918497   10844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 05:46:02.035951   10844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 05:46:03.149554   10844 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > daemonset.apps/kindnet configured
	I0603 05:46:03.149708   10844 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1137532s)
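
The CNI step above stats /opt/cni/bin/portmap, copies a 2438-byte kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the pinned kubectl. A rough sketch of that write-then-apply flow, run locally rather than through minikube's ssh_runner; the local cni.yaml source file is a placeholder:

    // Sketch: write a CNI manifest to disk and apply it with a pinned
    // kubectl. Paths match the log; the flow is illustrative only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyCNIManifest(manifest []byte) error {
        target := "/var/tmp/minikube/cni.yaml" // same target path as the scp above
        if err := os.WriteFile(target, manifest, 0o644); err != nil {
            return err
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", target)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        return err
    }

    func main() {
        manifest, err := os.ReadFile("cni.yaml") // placeholder local copy
        if err == nil {
            err = applyCNIManifest(manifest)
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
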
	I0603 05:46:03.149708   10844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 05:46:03.149708   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:03.149708   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.149708   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.149708   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.159576   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:46:03.159576   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.159576   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.159576   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Audit-Id: 6654eba0-33f3-43a7-9055-36db84aa15f8
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.162263   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85352 chars]

	I0603 05:46:03.168732   10844 system_pods.go:59] 12 kube-system pods found
	I0603 05:46:03.168732   10844 system_pods.go:61] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 05:46:03.168732   10844 system_pods.go:61] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Pending
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 05:46:03.168732   10844 system_pods.go:61] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 05:46:03.168732   10844 system_pods.go:74] duration metric: took 19.0235ms to wait for pod list to return data ...
	I0603 05:46:03.168732   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:46:03.168732   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:46:03.168732   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.168732   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.168732   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.174802   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:03.174802   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.174802   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.174802   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Audit-Id: 9cfdf364-5833-4bf2-93d4-ada17267ae46
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.174802   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15626 chars]
	I0603 05:46:03.177147   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177197   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177242   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177242   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177278   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177278   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177278   10844 node_conditions.go:105] duration metric: took 8.5464ms to run NodePressure ...
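
The NodePressure check above walks the NodeList and reads each node's capacity (here: cpu 2 and ephemeral-storage 17734596Ki, repeated for all three nodes). A client-go sketch of the same readout; the kubeconfig path is the in-guest one from the log and is illustrative:

    // Sketch: list nodes and read CPU / ephemeral-storage capacity,
    // mirroring the node_conditions.go output above. Illustrative only.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // e.g. "node cpu capacity is 2", "ephemeral capacity is 17734596Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
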
	I0603 05:46:03.177319   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:46:03.600558   10844 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 05:46:03.600558   10844 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 05:46:03.600642   10844 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 05:46:03.600642   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0603 05:46:03.600642   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.600642   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.600642   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.604401   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:03.604401   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.604401   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.604401   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Audit-Id: 41adc2f2-1d4b-4f2d-b4ba-0f9dc7981541
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.605937   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1754"},"items":[{"metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1736","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0603 05:46:03.607356   10844 kubeadm.go:733] kubelet initialised
	I0603 05:46:03.607356   10844 kubeadm.go:734] duration metric: took 6.7139ms waiting for restarted kubelet to initialise ...
	I0603 05:46:03.607356   10844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:46:03.607356   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:03.607356   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.607356   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.607356   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.616366   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:46:03.616366   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.616366   10844 round_trippers.go:580]     Audit-Id: 711df3df-3d4b-44bd-959b-438fd3cb4bdc
	I0603 05:46:03.616366   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.617383   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.617383   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.617383   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.617383   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.619159   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1754"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87069 chars]
	I0603 05:46:03.622766   10844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.622766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:03.622766   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.622766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.622766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.625427   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.625836   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Audit-Id: 97b4e11c-3bfe-4a29-9bec-867b105c6afa
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.625836   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.625836   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.625894   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.626050   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:03.626834   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.626834   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.626834   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.626906   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.628933   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.629290   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Audit-Id: e75c020c-40ff-433c-b1ab-e6227fca65f3
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.629290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.629363   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.629363   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.629789   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.629936   10844 pod_ready.go:97] node "multinode-316400" hosting pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.629936   10844 pod_ready.go:81] duration metric: took 7.1697ms for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.629936   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
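
The exchange above is the readiness gate pod_ready applies to every system-critical pod: fetch the pod, then its hosting node, and skip the wait whenever the node itself reports "Ready":"False". A client-go sketch of that gate as a hypothetical helper (not minikube's pod_ready.go):

    // Sketch of the gate above: a pod only counts as Ready once its
    // hosting node is Ready. Hypothetical helper, illustrative only.
    package readiness

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podAndNodeReady(cs kubernetes.Interface, ns, pod string) (bool, error) {
        p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        n, err := cs.CoreV1().Nodes().Get(context.TODO(), p.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                // matches: node ... has status "Ready":"False" (skipping!)
                return false, fmt.Errorf("node %q hosting pod %q is not Ready", n.Name, pod)
            }
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
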
	I0603 05:46:03.629936   10844 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.629936   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:46:03.630474   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.630474   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.630474   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.634532   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:03.635012   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Audit-Id: 46d58ecb-4e01-412e-b1a4-f4d76d3d2558
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.635012   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.635012   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.635287   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1736","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0603 05:46:03.635833   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.635897   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.635897   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.635897   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.638534   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.638953   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.638953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.638953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Audit-Id: 76e0d4d8-6f8b-49fb-961b-e456964ba094
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.639112   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.639823   10844 pod_ready.go:97] node "multinode-316400" hosting pod "etcd-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.639823   10844 pod_ready.go:81] duration metric: took 9.8864ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.639823   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "etcd-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.639823   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.640061   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:46:03.640085   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.640112   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.640112   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.643083   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.643083   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.643083   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Audit-Id: 8a87c0f9-f18b-477a-a83d-81e5ef4078a6
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.643174   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.643174   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.643263   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1749","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0603 05:46:03.644003   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.644003   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.644003   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.644003   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.646708   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.646708   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.646708   10844 round_trippers.go:580]     Audit-Id: 4d2547f7-17af-4a3e-8365-c026b24030fb
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.647156   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.647156   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.647373   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.647569   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-apiserver-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.647569   10844 pod_ready.go:81] duration metric: took 7.65ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.647569   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-apiserver-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.647569   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.647569   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:46:03.647569   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.647569   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.647569   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.650340   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.650340   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.650340   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.650340   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Audit-Id: f16c7884-0a9c-4f8e-9b8b-ab886bcc7161
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.650340   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1732","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0603 05:46:03.652028   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.652028   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.652028   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.652028   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.657730   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:03.657730   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.657802   10844 round_trippers.go:580]     Audit-Id: 656a53b8-0eb5-4880-9f75-21747b13027c
	I0603 05:46:03.657802   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.657833   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.657833   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.657868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.657868   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.657900   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.658742   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-controller-manager-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.658810   10844 pod_ready.go:81] duration metric: took 11.2402ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.658860   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-controller-manager-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.658860   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.806608   10844 request.go:629] Waited for 147.5072ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:46:03.806893   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:46:03.806893   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.806893   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.806893   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.812233   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:03.812233   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.812483   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Audit-Id: 3da3227d-8c65-448c-bf45-e5b417278c40
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.812483   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.812602   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0603 05:46:04.008741   10844 request.go:629] Waited for 195.2613ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:46:04.009107   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:46:04.009107   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.009107   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.009107   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.013465   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:04.014327   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.014327   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.014327   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Audit-Id: 52675769-063f-4a47-a5cb-51e5e80a6124
	I0603 05:46:04.014619   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1720","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:46:04.014619   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:46:04.015172   10844 pod_ready.go:81] duration metric: took 356.2769ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:04.015172   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
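
The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, whose defaults are QPS 5 with burst 10, so the burst of pod/node GETs during these readiness checks starts queueing after a few requests. A sketch of where those knobs live on the REST config (values illustrative):

    // Sketch: raising client-go's client-side rate limits so bursts of
    // GETs are not delayed by request.go's throttling. Illustrative only.
    package throttle

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50    // client-go default: 5 requests/second
        cfg.Burst = 100 // client-go default: 10
        return kubernetes.NewForConfig(cfg)
    }
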
	I0603 05:46:04.015172   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.210170   10844 request.go:629] Waited for 194.5553ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:46:04.210289   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:46:04.210289   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.210289   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.210289   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.213666   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.214308   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.214308   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.214308   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Audit-Id: 1ec071ec-56bb-4634-81ca-b3fe83687730
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.214419   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.215978   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0603 05:46:04.413378   10844 request.go:629] Waited for 196.3724ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:04.413597   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:04.413597   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.413597   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.413597   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.419302   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:04.419302   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.419302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.419302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Audit-Id: 2f7c0a3a-f297-4f31-b59c-6c07514a7363
	I0603 05:46:04.419866   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:04.420232   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-proxy-ks64x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:04.420232   10844 pod_ready.go:81] duration metric: took 405.0585ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:04.420232   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-proxy-ks64x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:04.420232   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.601586   10844 request.go:629] Waited for 181.1549ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:46:04.601844   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:46:04.601844   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.601844   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.601844   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.606085   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.606085   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.606085   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.606085   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Audit-Id: c2d24a1b-1652-4f33-8a8b-3ecfd4337c26
	I0603 05:46:04.606167   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.606465   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"609","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
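The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter, not from the API server. The client-go defaults (QPS 5, burst 10) explain the roughly 200 ms gaps between consecutive GETs here. A sketch of the knobs involved (default values shown; minikube may tune them differently):

    import (
        "k8s.io/client-go/rest"
        "k8s.io/client-go/util/flowcontrol"
    )

    // withDefaultThrottle shows where the request.go:629 waits originate:
    // a token-bucket limiter attached to the client's rest.Config.
    func withDefaultThrottle(cfg *rest.Config) *rest.Config {
        cfg.QPS = 5    // steady-state requests/second -> ~200ms spacing
        cfg.Burst = 10 // short bursts may exceed QPS
        cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
        return cfg
    }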
	I0603 05:46:04.805770   10844 request.go:629] Waited for 198.2626ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:46:04.806179   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:46:04.806179   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.806179   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.806179   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.809996   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.809996   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.809996   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.809996   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.810193   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Audit-Id: 67ca5c7b-a8de-4ab8-b6ca-57a125a2f43b
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.810398   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"1676","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3826 chars]
	I0603 05:46:04.810499   10844 pod_ready.go:92] pod "kube-proxy-z26hc" in "kube-system" namespace has status "Ready":"True"
	I0603 05:46:04.810499   10844 pod_ready.go:81] duration metric: took 390.2665ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.810499   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:05.009623   10844 request.go:629] Waited for 198.1488ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:46:05.009823   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:46:05.009823   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.009823   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.009885   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.013633   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:05.013633   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.013980   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Audit-Id: f7be52fb-b8db-435d-8c0c-5fb7106ea4da
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.013980   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.014213   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1734","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0603 05:46:05.214723   10844 request.go:629] Waited for 199.4584ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.214932   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.214932   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.214932   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.214932   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.219400   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:05.219400   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Audit-Id: 8d2759a0-d182-4caf-8eec-cbe277482d91
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.219400   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.219400   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.219400   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:05.220353   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-scheduler-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:05.220414   10844 pod_ready.go:81] duration metric: took 409.9133ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:05.220414   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-scheduler-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:05.220414   10844 pod_ready.go:38] duration metric: took 1.6130522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
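The 1.6 s "extra waiting" summary above covers one pass over the system-critical component labels. A hedged sketch of that pass (imports as in the first sketch, plus "time"), reusing isNodeReady implicitly and assuming a hypothetical waitPodReady helper:

    // Illustrative only; minikube's pod_ready.go differs in detail.
    func waitSystemPods(ctx context.Context, cs kubernetes.Interface) error {
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                // waitPodReady is hypothetical; per the log, pods on a
                // NotReady node are skipped rather than failed.
                if err := waitPodReady(ctx, cs, p.Name, 4*time.Minute); err != nil {
                    return err
                }
            }
        }
        return nil
    }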
	I0603 05:46:05.220474   10844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 05:46:05.242018   10844 command_runner.go:130] > -16
	I0603 05:46:05.242109   10844 ops.go:34] apiserver oom_adj: -16
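The oom_adj probe verifies the API server is shielded from the kernel OOM killer: on the legacy /proc/<pid>/oom_adj scale (-17 to 15, where -17 disables killing), -16 makes the process a near-last candidate. A local stdlib equivalent of the probed command:

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
    func readOOMAdj(pid int) (int, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }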
	I0603 05:46:05.242109   10844 kubeadm.go:591] duration metric: took 13.583325s to restartPrimaryControlPlane
	I0603 05:46:05.242109   10844 kubeadm.go:393] duration metric: took 13.6486418s to StartCluster
	I0603 05:46:05.242109   10844 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:46:05.242109   10844 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:46:05.243914   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
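The kubeconfig rewrite is serialized through a file lock whose Delay (500ms) and Timeout (1m0s) fields appear in the log line above. A generic stdlib sketch of that acquire-with-retry pattern (minikube's lock.go uses its own lock package, not this code):

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries an exclusive create every `delay` until `timeout`.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        lockPath := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", lockPath)
            }
            time.Sleep(delay) // 500ms per the logged Delay
        }
    }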
	I0603 05:46:05.245415   10844 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 05:46:05.248790   10844 out.go:177] * Verifying Kubernetes components...
	I0603 05:46:05.245587   10844 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 05:46:05.245697   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:46:05.256738   10844 out.go:177] * Enabled addons: 
	I0603 05:46:05.259080   10844 addons.go:510] duration metric: took 13.4927ms for enable addons: enabled=[]
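With every entry of the toEnable map logged above set to false, the enable pass has nothing to do, hence "enabled=[]" after 13.5 ms. Schematically (assumed shape, not minikube's addons.go; needs "sort"):

    // enableAddons returns the sorted names whose flag is true.
    func enableAddons(toEnable map[string]bool) []string {
        var enabled []string
        for name, want := range toEnable {
            if want {
                enabled = append(enabled, name) // apply the addon's manifests here
            }
        }
        sort.Strings(enabled)
        return enabled // empty for the map in this run
    }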
	I0603 05:46:05.267034   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:46:05.532765   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
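Both systemctl commands run inside the guest VM over SSH. A self-contained x/crypto/ssh sketch of the pattern (user, address, and signer are placeholders, not taken from this run):

    import "golang.org/x/crypto/ssh"

    func restartKubelet(addr string, signer ssh.Signer) error {
        cfg := &ssh.ClientConfig{
            User:            "docker", // placeholder
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", addr, cfg) // e.g. "<vm-ip>:22"
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("sudo systemctl daemon-reload && sudo systemctl start kubelet")
    }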
	I0603 05:46:05.562711   10844 node_ready.go:35] waiting up to 6m0s for node "multinode-316400" to be "Ready" ...
	I0603 05:46:05.562796   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.562796   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.562796   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.562796   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.567381   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:05.567381   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Audit-Id: 7e2a5c7f-e003-4914-9d7d-581639571f34
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.567381   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.567381   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.567381   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:06.074643   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:06.074692   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:06.074692   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:06.074692   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:06.079283   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:06.079345   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:06 GMT
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Audit-Id: 4adcb52e-20ee-4162-8284-a92b99c18ab2
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:06.079345   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:06.079345   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:06.080318   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:06.577330   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:06.577330   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:06.577330   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:06.577330   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:06.584367   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:06.584367   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:06.584447   10844 round_trippers.go:580]     Audit-Id: 3b299e94-176f-4180-a779-18102d14fe10
	I0603 05:46:06.584465   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:06.584465   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:06.584465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:06.584465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:06.584492   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:06 GMT
	I0603 05:46:06.584492   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.068441   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:07.068517   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:07.068517   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:07.068517   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:07.073023   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:07.073023   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Audit-Id: 4c5fb513-144f-4dd5-8552-478d817d21b4
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:07.073023   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:07.073023   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:07 GMT
	I0603 05:46:07.074000   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.577364   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:07.577428   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:07.577428   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:07.577428   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:07.581963   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:07.581963   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:07.581963   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:07.581963   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:07 GMT
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Audit-Id: e4e16eaf-1f14-4ab9-9d35-3ffe7e0bd927
	I0603 05:46:07.582637   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:07.582817   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.583131   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
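From here the log settles into a roughly 500 ms polling cadence against /api/v1/nodes/multinode-316400 until the node turns Ready or the 6 m budget expires. An approximate apimachinery equivalent, reusing the isNodeReady sketch above (wait.PollUntilContextTimeout exists in recent k8s.io/apimachinery; older trees use wait.PollImmediate):

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                ready, err := isNodeReady(ctx, cs, name)
                if err != nil {
                    return false, nil // tolerate transient API errors; keep polling
                }
                return ready, nil
            })
    }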
	I0603 05:46:08.078773   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:08.078773   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:08.078887   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:08.078887   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:08.082818   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:08.083175   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Audit-Id: 05c7671c-7cd1-46c5-a164-e25a1f5c631e
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:08.083265   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:08.083265   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:08.083265   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:08 GMT
	I0603 05:46:08.083772   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:08.576841   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:08.576916   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:08.576916   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:08.576916   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:08.581206   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:08.581652   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Audit-Id: b4edd00e-0f89-4f66-8e3e-fc74abc2604d
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:08.581652   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:08.581652   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:08.581752   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:08 GMT
	I0603 05:46:08.581999   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:09.071957   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:09.071957   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:09.071957   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:09.071957   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:09.074540   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:09.074540   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:09.075469   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:09.075469   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:09.075469   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:09.075469   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:09.075589   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:09 GMT
	I0603 05:46:09.075589   10844 round_trippers.go:580]     Audit-Id: 8fc7f7bf-3b36-4c58-b6a1-661a52e71393
	I0603 05:46:09.076023   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:09.573744   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:09.573828   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:09.573828   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:09.573914   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:09.578011   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:09.578011   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:09.578101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:09.578101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:09 GMT
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Audit-Id: 2be9aa31-65a2-4968-ad39-ac28e016d90f
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:09.578301   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:10.071366   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:10.071563   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:10.071563   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:10.071563   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:10.083357   10844 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 05:46:10.083791   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:10.083791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:10.083791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:10 GMT
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Audit-Id: b3c237d8-b16d-48b5-9a3d-47a314a0aa94
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:10.083989   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:10.084704   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:10.570200   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:10.570317   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:10.570317   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:10.570317   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:10.574521   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:10.574521   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:10.574521   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:10.574521   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:10 GMT
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Audit-Id: 47a858a9-2baf-4a00-82b8-953bf127f2b7
	I0603 05:46:10.574521   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:11.070062   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:11.070062   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:11.070062   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:11.070062   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:11.075195   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:11.075195   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:11.075195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:11 GMT
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Audit-Id: 9b176999-6cab-496c-97a5-f1d75bd80f83
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:11.075195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:11.075195   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:11.569387   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:11.569387   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:11.569387   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:11.569387   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:11.572978   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:11.573840   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:11.573840   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:11 GMT
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Audit-Id: 6b287fc1-376a-4e53-87a5-a686649f32ba
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:11.573840   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:11.574061   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.066027   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:12.066371   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:12.066371   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:12.066371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:12.069983   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:12.070161   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:12.070161   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:12.070161   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:12 GMT
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Audit-Id: e5a460a5-afb2-42be-b8e3-7e1a20f7f7da
	I0603 05:46:12.070335   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.569196   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:12.569196   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:12.569524   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:12.569524   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:12.572881   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:12.572881   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:12.572881   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:12.572881   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:12 GMT
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Audit-Id: d74734b3-e0c6-45c0-94f5-002662ec6e85
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:12.572881   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.574486   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:13.079064   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:13.079064   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:13.079064   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:13.079064   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:13.082104   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:13.082104   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:13.082104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:13.082104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:13 GMT
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Audit-Id: 107bf1cd-327a-4245-b5af-779380b9e0f4
	I0603 05:46:13.082104   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1840","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0603 05:46:13.568103   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:13.568103   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:13.568103   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:13.568103   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:13.571672   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:13.571913   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Audit-Id: f71c7a58-c235-49bb-b897-30b32d67dd2f
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:13.571913   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:13.571913   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:13 GMT
	I0603 05:46:13.572032   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.070100   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:14.070100   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:14.070256   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:14.070256   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:14.075036   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:14.075036   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:14.075036   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:14.075036   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:14 GMT
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Audit-Id: 6b22128b-93df-425c-b69c-83ccba85229b
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:14.075695   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.570114   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:14.570189   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:14.570189   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:14.570298   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:14.574290   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:14.574290   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:14.574290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:14.574290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:14 GMT
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Audit-Id: ac83ad67-45a3-4df0-8ed1-78c3cf0d1193
	I0603 05:46:14.575133   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:14.576346   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.577079   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:15.070465   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:15.070465   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:15.070465   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:15.070465   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:15.075081   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:15.075159   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:15.075159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:15.075159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:15.075159   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:15 GMT
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Audit-Id: 0e6f0b3f-3e1c-479f-a577-2e66f78bce92
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:15.076042   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:15.571590   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:15.571590   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:15.571590   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:15.571590   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:15.576154   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:15.576569   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:15.576700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:15.576700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:15 GMT
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Audit-Id: ac2f00ad-42e3-423c-856f-b3cae204d6ee
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:15.576942   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.070883   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:16.071037   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:16.071037   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:16.071037   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:16.074729   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:16.074820   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:16.074820   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:16.074820   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:16.074820   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:16 GMT
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Audit-Id: dc7f4f08-c8fd-486a-bafe-d8b154b85c93
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:16.074914   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.568347   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:16.568409   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:16.568409   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:16.568409   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:16.583832   10844 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 05:46:16.583832   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:16 GMT
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Audit-Id: 3a5afa14-1218-4de9-8aa2-7c8f3ef9a5b3
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:16.583832   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:16.583832   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:16.584869   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.585890   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:17.069126   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:17.069371   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:17.069371   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:17.069371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:17.073216   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:17.074235   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Audit-Id: 4ff4cb46-c2d9-4ac8-afe2-ee491e15edb1
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:17.074268   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:17.074268   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:17 GMT
	I0603 05:46:17.074404   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:17.567851   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:17.567851   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:17.567851   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:17.567851   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:17.571459   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:17.572203   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:17 GMT
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Audit-Id: b9bdccbc-7de3-41d1-8655-a420ca08653c
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:17.572203   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:17.572203   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:17.572203   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:18.065911   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:18.065911   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:18.065911   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:18.065911   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:18.069487   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:18.069487   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:18 GMT
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Audit-Id: 67e5227d-dcb4-43e6-b25a-897b79f42137
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:18.070298   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:18.070298   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:18.070462   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:18.565739   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:18.565793   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:18.565793   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:18.565793   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:18.570357   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:18.570737   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:18.570737   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:18.570737   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:18 GMT
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Audit-Id: 61bb0a33-31fd-4a1a-9e61-a0bb097ee8a1
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:18.571127   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:19.065584   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:19.065584   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:19.065711   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:19.065711   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:19.071741   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:19.071741   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:19.071741   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:19.071741   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:19 GMT
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Audit-Id: bf2d77e7-351e-421c-b07e-ede7d88cd4e1
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:19.072665   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:19.072665   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:19.576811   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:19.577060   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:19.577060   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:19.577060   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:19.580428   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:19.581433   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Audit-Id: bbee6213-6d48-4de1-904b-1f2bb2d1d301
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:19.581433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:19.581433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:19 GMT
	I0603 05:46:19.582292   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:20.075910   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:20.075910   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:20.075910   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:20.075910   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:20.081097   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:20.081097   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Audit-Id: b0b54a45-379a-4c6a-8e4f-778e74972f17
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:20.081263   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:20.081263   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:20.081263   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:20 GMT
	I0603 05:46:20.081575   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:20.575599   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:20.575807   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:20.575807   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:20.575807   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:20.580445   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:20.580748   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Audit-Id: 89ce28af-65d4-421e-9769-b9b912529747
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:20.580748   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:20.580748   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:20 GMT
	I0603 05:46:20.581007   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:21.076001   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:21.076001   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:21.076001   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:21.076001   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:21.080618   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:21.080618   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:21.080788   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:21.080788   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:21 GMT
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Audit-Id: ea74ae3a-2bb4-4e64-a02a-736c4771d45c
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:21.081081   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:21.081731   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:21.577892   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:21.577892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:21.577892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:21.577892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:21.582493   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:21.582822   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Audit-Id: 5fe8c26c-adc6-4506-a64f-89f7b9dd2651
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:21.582822   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:21.582822   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:21.582916   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:21 GMT
	I0603 05:46:21.583116   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:22.078395   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:22.078395   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:22.078395   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:22.078395   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:22.084939   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:22.084939   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:22.085024   10844 round_trippers.go:580]     Audit-Id: cd385bef-d152-40c2-ad35-b19185cb0741
	I0603 05:46:22.085024   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:22.085081   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:22.085103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:22.085103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:22.085103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:22 GMT
	I0603 05:46:22.085103   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:22.578126   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:22.578126   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:22.578223   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:22.578223   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:22.582030   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:22.583264   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:22.583264   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:22.583264   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:22 GMT
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Audit-Id: 2dc6c365-a588-478b-af58-f1f4e01df756
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:22.583561   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:23.077114   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:23.077114   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:23.077114   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:23.077114   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:23.081800   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:23.081861   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Audit-Id: 2e0145cc-ae20-44a8-abf4-79d00fde2c68
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:23.081861   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:23.081861   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:23 GMT
	I0603 05:46:23.082466   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:23.083109   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:23.575741   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:23.575741   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:23.576028   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:23.576028   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:23.580351   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:23.580351   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:23 GMT
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Audit-Id: 0a0d481c-34b4-4894-93a2-b466f6d64d14
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:23.580814   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:23.580814   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:23.580814   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:23.581667   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:24.073667   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:24.073667   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:24.073667   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:24.073667   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:24.077226   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:24.078230   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:24.078230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:24 GMT
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Audit-Id: 526bfbed-8787-40b4-a45f-ddd6e3037735
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:24.078337   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:24.078337   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:24.079237   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:24.573646   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:24.573826   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:24.573826   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:24.573826   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:24.577479   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:24.577479   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:24.577479   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:24.577479   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:24 GMT
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Audit-Id: 18bad2a6-97eb-4f1e-8654-2dcb107fc991
	I0603 05:46:24.578796   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.075565   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:25.075565   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:25.075565   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:25.075565   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:25.082159   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:25.082159   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Audit-Id: 5c1c0c7e-0f37-4c6d-97eb-91bafae935b6
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:25.082159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:25.082159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:25.082511   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:25 GMT
	I0603 05:46:25.083181   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.579199   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:25.579199   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:25.579199   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:25.579199   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:25.583802   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:25.583953   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:25 GMT
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Audit-Id: b49e5177-6df8-4437-9e5f-dae8488ceb0a
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:25.583953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:25.583953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:25.584438   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.585104   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:26.067574   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:26.067629   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:26.067695   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:26.067695   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:26.070160   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:26.070160   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:26.070160   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:26.070160   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:26 GMT
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Audit-Id: c498fb7b-1ec7-4163-9fa4-8791b74dcb94
	I0603 05:46:26.070160   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:26.567460   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:26.567598   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:26.567598   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:26.567598   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:26.571673   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:26.571673   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:26.571778   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:26 GMT
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Audit-Id: d50127b8-d425-4711-a59e-31c71c173b3f
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:26.571778   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:26.571952   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:27.066403   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:27.066403   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:27.066403   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:27.066403   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:27.070058   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:27.070403   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:27.070403   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:27.070490   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:27.070490   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:27 GMT
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Audit-Id: 9ad1701f-1873-439d-b7aa-30d831faf859
	I0603 05:46:27.070490   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:27.568255   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:27.568255   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:27.568255   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:27.568255   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:27.572870   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:27.572870   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:27 GMT
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Audit-Id: 1bbdaa8e-dc6f-4fd7-a4c0-87e43b385069
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:27.573791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:27.573791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:27.573983   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:28.067438   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:28.067628   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:28.067628   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:28.067628   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:28.070996   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:28.071538   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:28.071538   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:28 GMT
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Audit-Id: 783b6c64-20f6-4b28-a7b4-9650b7d9822a
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:28.071538   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:28.072320   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:28.072578   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
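
The ~500 ms cadence of GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400 requests above is minikube's node-readiness wait (node_ready.go) polling the API server until the Node reports a Ready=True condition. Below is a minimal client-go sketch of that pattern, assuming a reachable cluster in the default kubeconfig; the function name waitForNodeReady, the 500 ms interval, and the 6-minute timeout are illustrative assumptions matched to the log cadence, not minikube's actual source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls GET /api/v1/nodes/<name> until the Node reports a
// Ready=True condition or the timeout elapses. (Illustrative sketch, not
// minikube's actual implementation.)
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			// Corresponds to the log line: node "<name>" has status "Ready":"False"
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(500 * time.Millisecond) // ~500 ms between polls, matching the log timestamps
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	// Assumes the default kubeconfig (~/.kube/config) points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "multinode-316400", 6*time.Minute); err != nil {
		panic(err)
	}
}

The same wait can be reproduced by hand with kubectl wait --for=condition=Ready node/multinode-316400 --timeout=6m0s.
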
	I0603 05:46:28.566384   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:28.566384   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:28.566384   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:28.566384   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:28.570732   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:28.570732   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Audit-Id: 490896be-5ac1-4ec2-9bcb-da70d04c90dc
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:28.570732   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:28.570732   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:28 GMT
	I0603 05:46:28.570732   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:29.064639   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:29.064845   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:29.064845   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:29.064845   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:29.068499   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:29.069320   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:29.069320   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:29.069320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:29.069320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:29.069320   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:29 GMT
	I0603 05:46:29.069409   10844 round_trippers.go:580]     Audit-Id: 6211cb6b-f8e0-42a5-bd89-510fdcda5d1f
	I0603 05:46:29.069409   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:29.069836   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:29.578833   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:29.579067   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:29.579067   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:29.579067   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:29.587632   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:46:29.587632   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Audit-Id: 4b2c084f-c84b-40fd-9d86-032803f81980
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:29.587632   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:29.587632   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:29 GMT
	I0603 05:46:29.587632   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:30.074571   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:30.074727   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:30.074727   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:30.074727   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:30.078394   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:30.079315   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:30.079315   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:30 GMT
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Audit-Id: 8e95ebe1-32fd-4549-a9d6-5f81a10fe8d1
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:30.079315   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:30.079691   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:30.080076   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:30.574878   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:30.574973   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:30.574973   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:30.574973   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:30.578776   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:30.578776   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Audit-Id: 87ed4985-acc0-48a0-a112-aac2d51a953e
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:30.579433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:30.579433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:30 GMT
	I0603 05:46:30.579677   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:31.063582   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:31.063582   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:31.063582   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:31.064007   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:31.067873   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:31.067924   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:31.067924   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:31 GMT
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Audit-Id: 62e24bde-a036-46a1-8346-e6d6b311c053
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:31.067924   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:31.067924   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:31.563651   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:31.563651   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:31.563651   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:31.563651   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:31.567313   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:31.568228   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:31.568228   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:31 GMT
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Audit-Id: 7b28854b-7320-46d3-ac7c-bdaf60c86c7c
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:31.568313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:31.568410   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.065940   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:32.066010   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:32.066010   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:32.066010   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:32.070454   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:32.070454   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:32.070454   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:32.070454   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:32.070454   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:32 GMT
	I0603 05:46:32.070454   10844 round_trippers.go:580]     Audit-Id: f81f4c25-2293-45db-8b5d-32782581d530
	I0603 05:46:32.070552   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:32.070552   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:32.070806   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.566344   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:32.566435   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:32.566435   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:32.566435   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:32.569840   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:32.570779   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:32.570829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:32.570829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:32 GMT
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Audit-Id: e752cc27-7f11-47e9-ab87-7ef3b27e7b3b
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:32.570829   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.571447   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:33.070475   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:33.070475   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:33.070475   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:33.070555   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:33.074452   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:33.075226   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:33.075226   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:33.075226   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:33.075226   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:33 GMT
	I0603 05:46:33.075226   10844 round_trippers.go:580]     Audit-Id: 66bd618f-5e82-471e-8898-c94a374d0d7c
	I0603 05:46:33.075285   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:33.075285   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:33.075285   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:33.567934   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:33.567934   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:33.567934   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:33.567934   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:33.572289   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:33.572289   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:33.572289   10844 round_trippers.go:580]     Audit-Id: 5a770c06-e325-4b54-84d9-86ed273ace5b
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:33.572524   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:33.572524   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:33 GMT
	I0603 05:46:33.572643   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.070177   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:34.070177   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:34.070177   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:34.070177   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:34.075281   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:34.075281   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:34.075281   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:34 GMT
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Audit-Id: 81138375-de80-4782-8b76-6f36480d0fbd
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:34.075372   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:34.075372   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:34.075840   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.568295   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:34.568295   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:34.568295   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:34.568295   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:34.574232   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:34.574232   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:34.574302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:34.574326   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:34.574326   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:34 GMT
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Audit-Id: c5302c01-9acd-42d6-a5d0-7d94359e5a21
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:34.574883   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.575126   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:35.072163   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:35.072163   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:35.072163   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:35.072163   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:35.076002   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:35.076525   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:35.076525   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:35.076525   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:35.076525   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:35.076525   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:35 GMT
	I0603 05:46:35.076585   10844 round_trippers.go:580]     Audit-Id: 47125028-a6f6-4006-81b0-669c128bb885
	I0603 05:46:35.076585   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:35.076585   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:35.570924   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:35.571032   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:35.571032   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:35.571032   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:35.574720   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:35.575542   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:35.575542   10844 round_trippers.go:580]     Audit-Id: e0a91b25-751c-4d83-b7c6-2cae33cd48ca
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:35.575616   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:35.575616   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:35 GMT
	I0603 05:46:35.575616   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:36.068877   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:36.068978   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:36.068978   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:36.068978   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:36.071960   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:36.071960   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Audit-Id: 8ef66d3a-f616-41d7-914d-bb314100956f
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:36.071960   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:36.071960   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:36 GMT
	I0603 05:46:36.072910   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:36.567342   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:36.567342   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:36.567342   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:36.567342   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:36.571089   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:36.571089   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:36.571378   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:36.571378   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:36 GMT
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Audit-Id: 65d463e1-73ba-49f4-a6f4-de645f6dbcff
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:36.571690   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:37.067666   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:37.067740   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:37.067740   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:37.067740   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:37.071536   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:37.071987   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Audit-Id: 46739dcc-701d-4c3c-9c49-db76061f796c
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:37.071987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:37.071987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:37 GMT
	I0603 05:46:37.072456   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:37.072953   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:37.568495   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:37.568495   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:37.568495   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:37.568495   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:37.573122   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:37.573210   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:37.573210   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:37.573210   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:37.573210   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:37 GMT
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Audit-Id: 339cba3f-9192-485a-bd19-c4e2b6aecbc4
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:37.573468   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:38.067970   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:38.067970   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:38.067970   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:38.067970   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:38.071756   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:38.072739   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:38.072739   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Audit-Id: a1577ff5-a08c-41cd-8a52-cbea27e548e7
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:38.072739   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:38.073047   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:38.566184   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:38.566184   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:38.566184   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:38.566184   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:38.570579   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:38.570579   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:38.570579   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:38.570579   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Audit-Id: 2dbb8271-d0e2-4bd3-9e51-88a9aa5dbf9a
	I0603 05:46:38.570579   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:39.066774   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:39.066774   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:39.066774   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:39.066774   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:39.072360   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:39.072421   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Audit-Id: 1d65e360-1b71-458b-aa79-1993565c0c86
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:39.072421   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:39.072421   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 05:46:39.072805   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:39.073293   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:39.568445   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:39.568445   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:39.568445   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:39.568445   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:39.573045   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:39.573462   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Audit-Id: 1c2891d0-e198-45f4-88bf-c34204b35d91
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:39.573462   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:39.573462   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:39.574036   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:40.069965   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:40.070145   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:40.070145   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:40.070145   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:40.074896   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:40.074896   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:40.075566   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:40.075566   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Audit-Id: 279a2e19-d355-49d9-b371-e1837036748e
	I0603 05:46:40.075623   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:40.563817   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:40.563817   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:40.563898   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:40.563898   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:40.567202   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:40.567202   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:40.567202   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:40.567202   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:40.567202   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 05:46:40.567202   10844 round_trippers.go:580]     Audit-Id: f8a24f93-e404-4eb4-b0b4-d135d40a7083
	I0603 05:46:40.567923   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:40.567923   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:40.567992   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:41.066331   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.066331   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.066331   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.066331   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.069962   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:41.070860   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.070860   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.070860   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Audit-Id: deb0aff3-0585-46da-8c84-8d1e31951688
	I0603 05:46:41.071143   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:41.566381   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.566460   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.566460   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.566460   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.570868   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:41.570868   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Audit-Id: 0800d69b-66d4-4dce-b880-d5a1d269f949
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.570868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.570868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.571004   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:41.571550   10844 node_ready.go:49] node "multinode-316400" has status "Ready":"True"
	I0603 05:46:41.571727   10844 node_ready.go:38] duration metric: took 36.0086201s for node "multinode-316400" to be "Ready" ...
	I0603 05:46:41.571727   10844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
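
The polling loop above is minikube's node-readiness wait: roughly every 500 ms it issues GET /api/v1/nodes/multinode-316400 and inspects the node's status conditions, until Ready flips to True at 05:46:41 (about 36 s in, per the duration metric). Below is a minimal client-go sketch of the same check, assuming a standard kubeconfig; the function name waitNodeReady and the 500 ms/6 m figures are illustrative (taken from the cadence and wait budgets visible in this log), not minikube's actual code.

// Hedged sketch of the node-ready poll logged above (not minikube's implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-issues GET /api/v1/nodes/<name> (the requests in the log)
// until status.conditions reports Ready=True or the timeout elapses.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-316400", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
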
	I0603 05:46:41.571846   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:41.571892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.571892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.571892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.579805   10844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:46:41.579805   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Audit-Id: 583c8ed6-c5b8-4236-b5a4-dc159faa73b6
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.579805   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.579805   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.581748   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1890"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86508 chars]
	I0603 05:46:41.586176   10844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:41.586369   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:41.586369   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.586369   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.586369   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.592228   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:41.593103   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.593103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.593103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Audit-Id: af5eec5c-8c5c-4ff5-bbf5-27318c458233
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.593273   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:41.593893   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.593893   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.593974   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.593974   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.596240   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:41.596240   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.596240   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.596240   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.596240   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.596240   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.597120   10844 round_trippers.go:580]     Audit-Id: 9eb9f2f2-b7bd-464f-899d-8bda643967b0
	I0603 05:46:41.597120   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.597686   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:42.099382   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:42.099382   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.099472   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.099472   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.103829   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:42.103829   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.103829   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.103829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.103829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Audit-Id: 27d5169b-a1c3-4a70-856f-7332df0ca951
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.104883   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:42.105553   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:42.105704   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.105704   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.105704   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.110904   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:42.110904   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.110904   10844 round_trippers.go:580]     Audit-Id: 93bf5d72-e328-4c6b-837f-1add06a617ab
	I0603 05:46:42.110970   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.110970   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.110970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.110997   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.110997   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.112656   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:42.603745   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:42.603851   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.603869   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.603869   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.607891   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:42.607891   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Audit-Id: 57969027-9bf0-4c88-a5bb-6b9927e3ad9f
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.607891   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.608052   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.608052   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.608204   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:42.608535   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:42.608535   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.608535   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.608535   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.612139   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:42.612139   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.612139   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Audit-Id: b87066dd-eabf-492f-a856-ff84c9ef9329
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.612887   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.612887   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.613247   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:43.090946   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:43.091014   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.091014   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.091014   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.099769   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:46:43.099769   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.099769   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.100451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Audit-Id: cb4a1f7c-3600-4af7-94f1-98584c83b695
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.100617   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:43.101375   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:43.101375   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.101375   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.101375   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.103496   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:43.103496   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Audit-Id: 86b00535-9c31-4fa5-a0f9-ca96ec3bee13
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.103496   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.103496   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.103496   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:43.591408   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:43.591408   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.591408   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.591408   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.597989   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:43.597989   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.598987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.598987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Audit-Id: bb27ff89-be44-4c16-ae45-edfd25f59647
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.599231   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:43.600304   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:43.600364   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.600364   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.600364   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.604685   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:43.604685   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.605191   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.605191   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Audit-Id: 8f43b6d8-2f96-4b28-bfa4-29d3d8df26cb
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.605598   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:43.606075   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
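
From here the same cadence repeats against the coredns pod: each cycle GETs the pod, reads status.conditions for the Ready condition (still False at this point, since the pod's resourceVersion 1739 predates the restart), then re-GETs the node. A hedged sketch of that per-pod check, again with illustrative names (isPodReady is not minikube's helper) and the same standard-kubeconfig assumption:

// Hedged sketch of the pod-ready poll logged above (not minikube's implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady inspects what each GET .../pods/coredns-7db6d8ff4d-4hrc6 above
// returns: the Ready entry in the pod's status.conditions.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // the per-pod budget announced in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-7db6d8ff4d-4hrc6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // same ~500 ms cadence as the log
	}
	panic("pod never became Ready")
}
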
	I0603 05:46:44.090714   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:44.090714   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.090714   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.090714   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.095323   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:44.095619   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.095687   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Audit-Id: 93cedeaf-e621-47fb-9c6a-d61ed7f01d25
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.095687   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.096591   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:44.097372   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:44.097372   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.097372   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.097457   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.100577   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:44.100577   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.100757   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Audit-Id: e7cac9d4-982e-41a5-b00d-d95928bb1b85
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.100757   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.101182   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:44.592561   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:44.592561   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.592561   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.592561   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.596199   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:44.596199   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.596199   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Audit-Id: 925fb9b5-5a63-4d0f-8a62-743341c857ba
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.597276   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.597276   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.597451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:44.597734   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:44.598321   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.598321   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.598321   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.603700   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:44.603700   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.603700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.603700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Audit-Id: 136878a4-6043-4cf0-9280-5cf09a8082da
	I0603 05:46:44.604624   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:45.097706   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:45.097706   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.097793   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.097793   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.101101   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:45.101101   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.101101   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.101101   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.101101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.101188   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.101188   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.101188   10844 round_trippers.go:580]     Audit-Id: 2ed0fd87-87ef-466a-9aa3-9e0fb64882a3
	I0603 05:46:45.101394   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:45.102019   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:45.102019   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.102019   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.102019   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.104025   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:45.104397   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Audit-Id: cf7867dd-5cca-4769-8f05-37a786cd5cfb
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.104397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.104397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.104629   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:45.588316   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:45.588316   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.588316   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.588316   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.593103   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:45.593103   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.593103   10844 round_trippers.go:580]     Audit-Id: 4ec62c64-d0ab-4f25-8e9a-9822a1f0630d
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.593182   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.593182   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.594431   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:45.595140   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:45.595140   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.595140   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.595140   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.597734   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:45.598666   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.598666   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.598666   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Audit-Id: 60c8a907-d001-4ea2-8142-f9818c010b7d
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.599083   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:46.091840   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:46.091840   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.091840   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.091840   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.096844   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:46.096918   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Audit-Id: d212f569-b4bf-461c-969c-d96458abebfb
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.096918   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.096986   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.097009   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.097039   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:46.097869   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:46.097869   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.097869   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.097869   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.101612   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:46.102148   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.102148   10844 round_trippers.go:580]     Audit-Id: b04b45ae-82a8-46a6-afeb-9ceb29b28fed
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.102220   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.102220   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.102678   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:46.102951   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:46.587064   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:46.587064   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.587064   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.587064   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.592645   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:46.592694   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.592694   10844 round_trippers.go:580]     Audit-Id: a1ca5e3d-d184-4927-bf4e-98611b3a6e81
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.592780   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.592780   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.592960   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:46.593157   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:46.593732   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.593732   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.593732   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.597026   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:46.597026   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.597026   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.597026   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.597324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.597324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.597324   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.597324   10844 round_trippers.go:580]     Audit-Id: 35c6f14f-c16c-435b-b6c7-1fdb570eb043
	I0603 05:46:46.597694   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:47.101543   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:47.101748   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.101748   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.101748   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.105736   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:47.105816   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Audit-Id: ca108275-6223-4bf9-a5f0-4cc84a54f4a6
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.105816   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.105816   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.106154   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:47.106590   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:47.106590   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.106590   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.106590   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.109186   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:47.109854   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.109854   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.109854   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Audit-Id: 4521aba3-9f74-44fe-b23f-721e15790843
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.110079   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:47.598036   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:47.598036   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.598124   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.598124   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.601455   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:47.601712   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Audit-Id: f9c684d6-a3cf-4d50-9e01-f47e721118ee
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.601712   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.601712   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.601773   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.601908   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:47.602691   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:47.602691   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.602691   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.602691   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.605397   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:47.605397   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Audit-Id: a71c2d0e-e862-4994-be72-c02f866ee520
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.605397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.605397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.606161   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:48.096038   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:48.096038   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.096038   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.096038   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.100703   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:48.100896   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.100896   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.100896   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Audit-Id: ae2e4b1c-82d1-4c35-ac60-c37e7224cd64
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.100974   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.100974   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:48.101975   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:48.102055   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.102055   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.102055   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.104288   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:48.105295   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Audit-Id: 322b0b0c-ea20-408e-a436-ecb60f637781
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.105341   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.105341   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.105758   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:48.106227   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:48.594461   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:48.594764   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.594764   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.594764   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.598657   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:48.599655   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Audit-Id: 720fb9c1-514b-4fa4-9a8f-05ce7c92329e
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.599655   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.599655   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.599754   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.600156   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:48.600904   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:48.600904   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.600904   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.600904   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.603669   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:48.603669   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Audit-Id: 92eebe15-88e2-4ab5-90d9-831fedb9feda
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.603669   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.603669   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.604659   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:49.089944   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:49.089944   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.089944   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.089944   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.094814   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:49.095040   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Audit-Id: 7afcd4ad-a024-4cae-ae1d-35ac201565d9
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.095040   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.095040   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.095204   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:49.096542   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:49.096542   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.096542   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.096542   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.099424   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:49.099424   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.099424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.099424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Audit-Id: 9e37f3a9-2b34-4640-aafd-192c28452379
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.099928   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:49.588138   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:49.588138   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.588138   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.588219   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.593035   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:49.593035   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Audit-Id: a9062b19-bb98-47fd-ba54-46ce395c00a4
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.593035   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.593035   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.593035   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:49.594257   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:49.594257   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.594257   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.594329   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.597669   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:49.597669   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.597669   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Audit-Id: 7a792bcf-eb47-49e9-af10-de3d436655c0
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.598147   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.598147   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.598247   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.090419   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:50.090481   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.090481   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.090481   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.095354   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:50.095470   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Audit-Id: 45443a5b-ef6b-4809-8f22-caeae74ece9c
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.095543   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.095543   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.095543   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.095727   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:50.096702   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:50.096772   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.096772   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.096772   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.100114   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:50.100114   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Audit-Id: c79270e3-d329-4fb2-b2a2-94094173db8c
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.100114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.100114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.100114   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.589021   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:50.589021   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.589021   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.589021   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.592617   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:50.593195   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.593195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.593195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Audit-Id: afba2f2f-c402-4d71-b56a-b80a2f3717f7
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.593195   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:50.594187   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:50.594187   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.594264   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.594264   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.596495   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:50.596495   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Audit-Id: 9738cbfc-3e55-46e9-9b7c-4363e23525e6
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.596495   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.596495   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.597315   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.598173   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.598173   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:51.088879   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:51.088879   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.088879   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.088879   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.092443   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:51.093343   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Audit-Id: f2b2380a-e67d-4700-b5e6-9172bde419f4
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.093403   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.093403   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.093403   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.093403   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:51.094283   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:51.094283   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.094283   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.094283   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.099858   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:51.099858   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.099858   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.099858   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Audit-Id: 6a8112c4-9d14-4a94-b89e-dee65725a642
	I0603 05:46:51.099858   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:51.590439   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:51.590439   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.590439   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.590439   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.595216   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:51.595216   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.595216   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.595216   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.595216   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.595216   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.595571   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.595571   10844 round_trippers.go:580]     Audit-Id: d27e589a-1969-40f4-86aa-57de5ec2d3c4
	I0603 05:46:51.595950   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:51.596728   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:51.596728   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.596728   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.596728   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.600071   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:51.600071   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Audit-Id: 562e3c31-bdc4-4fbc-9263-0afe243cb053
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.600071   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.600071   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.600739   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.087802   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:52.087802   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.087872   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.087872   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.093564   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:52.093564   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.093656   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.093656   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Audit-Id: ad249f91-6b0b-447b-873f-c5a9fa7ae951
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.093863   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:52.094443   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:52.094443   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.094443   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.094443   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.098158   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:52.098158   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.098158   10844 round_trippers.go:580]     Audit-Id: 3c62881f-f33a-47c8-8c6d-96c853aa132e
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.098230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.098230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.098301   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.600282   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:52.600282   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.600369   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.600369   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.605074   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:52.605074   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.605074   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Audit-Id: ffd49097-2ea6-4cc7-8b1b-65a1c98feede
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.605074   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.605074   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:52.606227   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:52.606227   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.606227   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.606227   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.609393   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:52.609393   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Audit-Id: efb0837c-2971-4a51-89a7-44ca1ef1e9ab
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.609393   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.609393   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.609393   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.610134   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:53.101620   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:53.101681   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.101681   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.101681   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.108280   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:53.108280   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Audit-Id: 4c81f744-80fb-4a22-8695-9431833c3e42
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.108639   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.108639   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.109029   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:53.109791   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:53.109791   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.109920   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.109920   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.132408   10844 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0603 05:46:53.132408   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Audit-Id: 8da6ef5b-785e-423c-8c17-48d19ff52664
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.132482   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.132482   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.132814   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:53.587823   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:53.587823   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.587823   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.587823   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.592384   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:53.592445   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.592445   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.592445   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Audit-Id: eb713599-0b71-4e64-b070-1f158e15df3e
	I0603 05:46:53.592803   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:53.593085   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:53.593624   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.593624   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.593624   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.599215   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:53.599215   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Audit-Id: 2cefec6c-1577-4cf3-9c20-e04443c2b9ea
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.599215   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.599215   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.599930   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.087345   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:54.087522   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.087522   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.087522   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.091102   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.092086   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.092086   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.092086   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Audit-Id: 4d477120-e7aa-497a-913f-16a24bceb6e3
	I0603 05:46:54.092317   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:54.093171   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:54.093171   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.093171   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.093171   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.095742   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:54.095742   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.096176   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.096176   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.096249   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.096333   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.096540   10844 round_trippers.go:580]     Audit-Id: 170c493e-d4ac-45f6-8933-fd45c55eddfb
	I0603 05:46:54.096592   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.096870   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.601527   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:54.601527   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.601527   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.601527   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.605467   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.606324   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Audit-Id: e591b238-fbd1-4190-bcb2-931e7d4f16b7
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.606324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.606324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.607211   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:54.607933   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:54.607933   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.607933   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.607933   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.611522   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.611766   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Audit-Id: cd5fc7f8-c2a7-44af-bee1-1af246633fb9
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.611766   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.611850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.612511   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.613367   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:55.099434   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:55.099434   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.099434   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.099434   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.103807   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:55.103807   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.104730   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.104730   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.104730   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.104730   10844 round_trippers.go:580]     Audit-Id: 6e5eebcd-6723-4f5b-b30e-d9fc65dbd2c4
	I0603 05:46:55.104830   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.104830   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.105035   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:55.105892   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:55.105892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.105892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.105892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.110903   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:55.110903   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Audit-Id: 4964b26b-1723-4218-b263-8d2bbc28f2ab
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.110903   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.110903   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.111448   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:55.587077   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:55.587077   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.587187   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.587187   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.591314   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:55.591697   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.591697   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.591697   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Audit-Id: af6f64c4-9f55-4c16-a696-d2510ee5e6b1
	I0603 05:46:55.592139   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:55.592843   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:55.592922   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.592922   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.592922   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.598914   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:55.598914   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.598914   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Audit-Id: 30c925a1-0569-4d0f-a251-21408d1536a2
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.598914   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.598914   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:56.101735   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:56.101735   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.101735   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.101735   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.106313   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:56.106313   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.106313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.106313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.106313   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.106313   10844 round_trippers.go:580]     Audit-Id: 1eddcf0d-da5a-4445-b23e-650fbfc15ee1
	I0603 05:46:56.106429   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.106429   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.106604   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:56.107414   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:56.107485   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.107485   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.107485   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.109773   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.109773   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.109773   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.110549   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.110549   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Audit-Id: 3e21a8f0-6f10-4585-ab57-330ad2b8d7b2
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.110753   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:56.599817   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:56.599817   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.599817   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.599817   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.602405   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.603246   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.603246   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.603246   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.603246   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Audit-Id: d1a656fd-9164-44a5-9ceb-ad1cff9de083
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.603575   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:56.604097   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:56.604097   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.604097   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.604097   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.606670   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.606670   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Audit-Id: 540db8f7-b5ab-4875-885a-fe44442f05dd
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.606670   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.606670   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.607757   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:57.096624   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:57.096813   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.096813   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.096813   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.100612   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:57.101149   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.101149   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Audit-Id: fa35b893-902b-4b1b-81b9-30e9943ac660
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.101149   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.101407   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:57.101744   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:57.102273   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.102273   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.102273   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.106922   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.106922   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.106922   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.106922   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Audit-Id: 99f98ac1-298c-4b65-bacc-7bebdff9b954
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.107609   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:57.107804   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:57.597452   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:57.597452   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.597452   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.597452   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.602065   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.602065   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Audit-Id: 72c1de68-358d-4304-973b-863283f8f124
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.602065   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.602498   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.602498   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.602994   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:57.603624   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:57.603624   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.603624   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.603624   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.607998   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.607998   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Audit-Id: 29143fcd-0c3a-40ab-b72c-95381e387c84
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.607998   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.607998   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.608873   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:58.098684   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:58.098684   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.098796   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.098796   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.102942   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:58.102942   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.102942   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Audit-Id: 62725f5d-d69b-4115-b212-37447c8a8e8a
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.102942   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.102942   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:58.104188   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:58.104246   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.104246   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.104246   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.107675   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.107675   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Audit-Id: f3ee7798-ddec-4f3d-8965-60e65f0954cf
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.107743   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.107743   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.108306   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:58.599203   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:58.599203   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.599203   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.599203   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.602801   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.603552   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.603552   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.603552   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Audit-Id: 98794095-a4eb-488f-a093-059538800e84
	I0603 05:46:58.603820   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:58.604626   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:58.604698   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.604698   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.604698   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.608080   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.608080   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Audit-Id: 621c6cad-b949-4304-908f-c983b9c26292
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.608080   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.608080   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.609250   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:59.099044   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:59.099044   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.099044   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.099044   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.102669   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.103392   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.103392   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.103392   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.103392   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.103392   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.103523   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.103523   10844 round_trippers.go:580]     Audit-Id: 44ee76ac-1b9b-4b69-bcca-065b6c082cac
	I0603 05:46:59.103699   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:59.104592   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:59.104695   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.104695   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.104695   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.108015   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.108097   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.108097   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Audit-Id: d421c825-8a83-4c80-b61b-02756b227db3
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.108097   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.108309   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:59.108840   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:59.598506   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:59.598506   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.598506   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.598506   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.602203   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.602203   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.602203   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.603194   10844 round_trippers.go:580]     Audit-Id: b95d7f54-ed6f-4a2f-a7ab-4dda251bba59
	I0603 05:46:59.603194   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.603221   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.603221   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.603221   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.603221   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:59.604357   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:59.604357   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.604412   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.604412   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.606804   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:59.606804   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Audit-Id: 427c2bf3-a0bc-47ca-88a9-6cfe21e8d39d
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.607417   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.607417   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.607844   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:00.099100   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:00.099100   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.099213   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.099213   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.102619   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:00.103465   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.103465   10844 round_trippers.go:580]     Audit-Id: 0aad60ad-7839-4aa8-9d75-04d7bf98312e
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.103526   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.103526   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.103778   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:00.104382   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:00.104382   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.104382   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.104382   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.106968   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:00.106968   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Audit-Id: 228407a3-9ca4-4994-8f3c-b392b9e4da13
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.106968   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.106968   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.107439   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:00.600209   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:00.600371   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.600371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.600451   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.604158   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:00.604917   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.604917   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.604917   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Audit-Id: 06c5f899-bddb-485c-beab-8da0a71f44f6
	I0603 05:47:00.605162   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:00.606474   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:00.606474   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.606474   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.606474   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.609106   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:00.609106   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.609106   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.609106   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.609106   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.609106   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.609964   10844 round_trippers.go:580]     Audit-Id: a2b91db9-b0d3-4352-87be-9bf0280a67f3
	I0603 05:47:00.609964   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.610936   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:01.100125   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:01.100270   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.100270   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.100270   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.104057   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:01.104057   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.104057   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.104057   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.104661   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Audit-Id: 6a316f66-9036-48b1-8557-9c19c33f22fb
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.104970   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:01.105826   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:01.105826   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.105826   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.105826   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.112171   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:01.112171   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Audit-Id: e212d988-fdbb-470f-9ec8-64d75e89b25b
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.112171   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.112171   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.112171   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:01.112943   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:47:01.600025   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:01.600025   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.600025   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.600025   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.604850   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:01.605410   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.605410   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.605410   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Audit-Id: f31112ce-8e1a-4169-8d15-bfcf31e0fc72
	I0603 05:47:01.605674   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:01.605830   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:01.605830   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.605830   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.605830   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.611853   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:01.611897   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Audit-Id: d04f0410-9684-4563-9c35-648067c75858
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.611897   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.611897   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.612633   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:02.097580   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:02.097702   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.097702   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.097702   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.102187   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:02.102187   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.102187   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.102187   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Audit-Id: 78f0850a-8e27-47e8-be59-58df6cc90b09
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.102741   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:02.103579   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:02.103596   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.103596   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.103596   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.106578   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:02.106694   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.106694   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.106694   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Audit-Id: ba15f9b3-3415-4a1d-b975-59100a12178a
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.107029   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:02.597164   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:02.597164   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.597164   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.597164   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.602382   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:02.602382   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.602382   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.602382   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Audit-Id: b9ff1126-9189-4e4c-aa9f-2ef453ed71ba
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.602382   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:02.603102   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:02.603102   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.603102   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.603102   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.606978   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:02.607114   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Audit-Id: a0fdd326-e683-4a52-8b1a-91948eb6e25d
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.607114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.607114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.607557   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.097400   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:03.097400   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.097494   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.097494   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.103819   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:03.103902   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.103929   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.103929   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Audit-Id: 47039cf0-45f6-4c6f-bee3-0f0890a4fb11
	I0603 05:47:03.103962   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.104038   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:03.104897   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:03.104897   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.104897   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.104897   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.108128   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:03.108128   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Audit-Id: 0cfdc209-2a80-497f-8551-86538ed0a330
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.108128   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.108128   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.108128   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.596353   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:03.596353   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.596353   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.596353   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.601075   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:03.601292   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Audit-Id: c95e241b-37bc-4fc7-b34c-62ffad918fa1
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.601292   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.601391   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.601391   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.601622   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:03.602431   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:03.602431   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.602431   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.602431   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.606000   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:03.606000   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.606000   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.606000   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.606338   10844 round_trippers.go:580]     Audit-Id: 4d11eb09-22be-461d-9b15-50f217bf7945
	I0603 05:47:03.606661   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.607236   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:47:04.096662   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:04.096766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.096766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.096766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.101198   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:04.101198   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Audit-Id: 157d21e3-5922-4f4b-bcf5-86d614ae3629
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.101198   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.101198   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.101500   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:04.102250   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:04.102250   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.102250   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.102322   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.104160   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:47:04.104160   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.105157   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Audit-Id: 33cb0ac4-bc1d-4086-ae7c-d8202de61269
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.105178   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.105327   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:04.599511   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:04.599642   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.599642   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.599642   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.604451   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:04.604451   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.604451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Audit-Id: d89f6690-271a-4ac5-8712-1ae5c1866e66
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.604451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.604451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:04.605615   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:04.605615   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.605678   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.605678   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.608044   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:04.608992   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Audit-Id: 70fbb42b-4171-482e-ad67-67bea4a635ec
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.609043   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.609043   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.609043   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.609317   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:05.088714   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:05.088714   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.088714   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.088714   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.093322   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:05.093322   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.093322   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.093322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.093499   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.093499   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.093499   10844 round_trippers.go:580]     Audit-Id: cde01b6d-0720-4209-aef4-38850b17c982
	I0603 05:47:05.093529   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.094317   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:05.095109   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:05.095180   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.095180   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.095180   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.098213   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:05.098213   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.098213   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.098213   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Audit-Id: 2954bc53-337a-441e-baec-25fcc96db60d
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.098572   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:05.598196   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:05.598273   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.598273   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.598399   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.601694   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:05.602194   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.602194   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.602194   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Audit-Id: bc2cfc8e-465a-4e60-a34c-33bba9966948
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.602451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:05.603232   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:05.603232   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.603232   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.603346   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.605740   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:05.605740   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.605740   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.606037   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Audit-Id: a771c840-d791-41e5-8aef-3d3555e3bab2
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.606331   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:06.091494   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:06.091587   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.091587   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.091634   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.095559   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:06.095635   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.095635   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.095635   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.095635   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Audit-Id: 1789ef2c-8f11-4086-b0a3-c03447bdbad5
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.095700   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:06.097220   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:06.097220   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.097220   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.097220   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.100155   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:06.100317   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.100317   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.100317   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Audit-Id: 9370556c-d96e-493a-974b-51d6304bd102
	I0603 05:47:06.100400   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.100813   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:06.101346   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:47:06.599417   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:06.599417   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.599417   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.599417   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.603964   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:06.604447   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Audit-Id: 5eeafd59-3dd5-46bc-a4d0-2c92bb30dda2
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.604447   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.604517   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.604517   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.605434   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:06.606418   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:06.606418   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.606418   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.606418   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.609663   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:06.609719   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.609719   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.609719   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Audit-Id: 2b554633-7673-49f5-a72d-ddb67aed1c31
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.609719   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.102159   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:07.102159   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.102486   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.102486   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.108141   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:07.108245   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.108245   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.108245   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Audit-Id: b45243ce-e442-4a87-91c3-27b98cedf22d
	I0603 05:47:07.108535   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0603 05:47:07.109278   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.109350   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.109350   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.109350   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.113677   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:07.113970   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.113970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.113970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Audit-Id: f3764f23-4356-448a-809e-46d35400c2cd
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.114279   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.114807   10844 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.114807   10844 pod_ready.go:81] duration metric: took 25.528442s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.114807   10844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.114898   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:47:07.114976   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.114976   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.114976   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.120765   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:07.120765   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.120765   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Audit-Id: 3fe523be-d456-4a71-8e04-aa0a7a390cb7
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.120765   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.121397   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1822","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0603 05:47:07.122030   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.122138   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.122168   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.122168   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.124801   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.124801   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.124801   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.124801   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Audit-Id: 26ba75ac-3bf0-47a0-8973-5b6d7b97958f
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.125478   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.125872   10844 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.125930   10844 pod_ready.go:81] duration metric: took 11.1227ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.125982   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.126105   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:47:07.126136   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.126136   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.126136   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.129386   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.129473   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.129473   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.129473   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Audit-Id: e94fc1be-cee3-47c8-a784-dfe73aed0dea
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.129473   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1830","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0603 05:47:07.130310   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.130381   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.130381   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.130381   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.132679   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.132679   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.132679   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.133083   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.133083   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Audit-Id: 8619a4b7-5646-4c6e-9273-ebcaabb3d40e
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.133137   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.133137   10844 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.133137   10844 pod_ready.go:81] duration metric: took 7.1551ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.133721   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.133766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:47:07.133766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.133877   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.133877   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.140103   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:07.140103   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.140103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.140103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Audit-Id: 159d5dde-1723-42d0-afff-9039ea610a9e
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.140640   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1827","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0603 05:47:07.140843   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.140843   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.140843   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.140843   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.142979   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.142979   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Audit-Id: a86b9720-4652-462c-b6ed-be6ab14218ff
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.142979   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.142979   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.143942   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.143942   10844 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.143942   10844 pod_ready.go:81] duration metric: took 10.2215ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.143942   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.143942   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:47:07.143942   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.143942   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.143942   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.147003   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.147003   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Audit-Id: 86000150-4726-4e8e-890d-d83b7449c0e3
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.148042   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.148042   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.148042   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.148335   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0603 05:47:07.148413   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:47:07.148413   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.148999   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.148999   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.151431   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.151431   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Audit-Id: 52d42757-7111-4838-908c-dfd00087f27c
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.151431   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.151431   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.151431   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1870","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:47:07.151431   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:47:07.151431   10844 pod_ready.go:81] duration metric: took 7.4891ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:47:07.151431   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:47:07.151431   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.304519   10844 request.go:629] Waited for 152.865ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:47:07.304766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:47:07.304766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.304766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.304766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.311533   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:07.311533   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.311533   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Audit-Id: bd96bae8-2fe9-4fb9-b5a4-cde2f9b34461
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.311533   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.311533   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0603 05:47:07.507286   10844 request.go:629] Waited for 194.4376ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.507375   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.507375   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.507375   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.507375   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.511274   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.511274   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.511274   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.511274   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.511934   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.511934   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.511934   10844 round_trippers.go:580]     Audit-Id: a10ca4f9-3fb3-40b8-9ca5-ddcd20ac08e7
	I0603 05:47:07.511984   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.512249   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.512622   10844 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.512622   10844 pod_ready.go:81] duration metric: took 361.1893ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.512622   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.710123   10844 request.go:629] Waited for 197.2536ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:47:07.710199   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:47:07.710199   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.710199   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.710199   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.713992   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.713992   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.713992   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.713992   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Audit-Id: 7711a4e9-cb4d-47b3-a381-a33dbc407eb2
	I0603 05:47:07.714916   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.715186   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"1913","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0603 05:47:07.912958   10844 request.go:629] Waited for 196.7258ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:47:07.913242   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:47:07.913242   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.913306   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.913306   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.916688   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.916688   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.916688   10844 round_trippers.go:580]     Audit-Id: d305dd8b-b2e2-4410-b6b8-847a151efc81
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.917072   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.917072   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.918033   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"1918","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4582 chars]
	I0603 05:47:07.918033   10844 pod_ready.go:97] node "multinode-316400-m02" hosting pod "kube-proxy-z26hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m02" has status "Ready":"Unknown"
	I0603 05:47:07.918033   10844 pod_ready.go:81] duration metric: took 405.4099ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	E0603 05:47:07.918033   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m02" hosting pod "kube-proxy-z26hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m02" has status "Ready":"Unknown"
	I0603 05:47:07.918652   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:08.115342   10844 request.go:629] Waited for 196.4696ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:47:08.115342   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:47:08.115342   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:08.115342   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:08.115342   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:08.119192   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:08.119192   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:08.119192   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:08 GMT
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Audit-Id: 2f941bfa-9707-40b0-8241-6cb30bab08f1
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:08.119729   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:08.119729   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:08.119729   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1854","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0603 05:47:08.303029   10844 request.go:629] Waited for 182.153ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:08.303135   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:08.303355   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:08.303355   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:08.303355   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:08.308062   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:08.308062   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:08.308062   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:08.308062   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:08.308062   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:08 GMT
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Audit-Id: ba997dd1-1d76-4bbc-af0c-e5f7b50b67d2
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:08.308758   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:08.309566   10844 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:08.309566   10844 pod_ready.go:81] duration metric: took 390.9119ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:08.309566   10844 pod_ready.go:38] duration metric: took 26.7377403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
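Note: the block above is minikube's internal readiness loop; the recurring ~150-200ms "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter, as the message itself says, not from API priority and fairness. A rough standalone equivalent of the same wait, again assuming this run's context name, would be:

    # Wait up to 6m for the kube-proxy pods to report Ready; unlike the loop
    # above, this does not skip pods whose node is NotReady/Unknown
    kubectl --context multinode-316400 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m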
	I0603 05:47:08.309566   10844 api_server.go:52] waiting for apiserver process to appear ...
	I0603 05:47:08.319426   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:08.343243   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:08.343658   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:08.352813   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:08.377442   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:08.377442   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:08.387382   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:08.415325   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:08.415432   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:08.415456   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:08.424932   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:08.448074   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:08.448926   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:08.448926   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:08.459567   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:08.484922   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:08.485166   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:08.485166   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:08.494224   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:08.524572   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:08.524572   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:08.524572   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:08.534541   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:08.562029   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:08.562029   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:08.563010   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
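Note: the container discovery above issues one docker ps query per control-plane component. It works because cri-dockerd names containers with a k8s_<container>_<pod>_<namespace>_... prefix, which the name filter matches. To reproduce the apiserver lookup by hand inside the node:

    # List all containers (including exited ones) whose name matches the
    # kubelet-created k8s_kube-apiserver prefix, printing only the ID
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}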
	I0603 05:47:08.563010   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:08.563010   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:08.596027   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:08.596204   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:08.596266   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596266   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:08.596370   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.596505   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596578   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596601   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596669   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596693   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.596693   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.596766   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596828   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596852   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.596922   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.596922   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597021   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597076   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597076   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597169   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597247   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597247   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597301   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597379   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597379   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597444   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.597473   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.597561   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598116   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
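Note: the "forbidden" RBAC warnings above are the scheduler starting before its RBAC rules are visible to it; they stop once "Caches are synced" is logged at 12:23:04, so no action is normally needed. The log's own suggested remedy, with its placeholder names deliberately left unfilled, would be:

    # Only if the errors persisted past startup; ROLEBINDING_NAME and
    # YOUR_NS:YOUR_SA are the log's placeholders, not values from this run
    kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA

The final "finished without leader elect" at 12:43:27 is the scheduler instance exiting, consistent with f39be6db7a1f being the pre-restart container (the kubelet restart below begins at 12:45:50).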
	I0603 05:47:08.609144   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:08.609144   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:08.638188   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:08.638248   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638929   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.639065   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.639065   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
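Note: kindnet above is reconciling one route per remote node's pod CIDR on a ~10s cycle (12:46:33, :43, :53, 12:47:03). To verify the routes it programs, on the node itself:

    # Expect the two routes from the log: 10.244.1.0/24 via 172.17.94.201
    # and 10.244.3.0/24 via 172.17.87.60
    ip route show | grep '10\.244\.'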
	I0603 05:47:08.642263   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:08.642263   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Top
ologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.680830   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
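The kubelet entries above capture two recurring failure modes while multinode-316400 restarts: pod sandbox creation is blocked because no CNI config has been written yet ("cni config uninitialized"), and volume setup for coredns-7db6d8ff4d-4hrc6 and busybox-fc5497c4f-pm79t fails with exponentially backed-off retries (16s, then 32s) because the kubelet's object cache has not yet re-registered the "coredns" ConfigMap and "kube-root-ca.crt". Both conditions normally clear once the CNI plugin drops its config into /etc/cni/net.d and the kubelet resyncs with the apiserver. As an illustrative cross-check (not part of the harness output), commands like the following should show the CNI config appearing and the NetworkReady errors stopping:

    out/minikube-windows-amd64.exe -p multinode-316400 ssh -- ls /etc/cni/net.d
    out/minikube-windows-amd64.exe -p multinode-316400 ssh -- "sudo journalctl -u kubelet --no-pager | grep NetworkReady | tail -n 5"
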
	I0603 05:47:08.728776   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:08.728776   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:08.766656   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:08.766768   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766896   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:08.766920   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766949   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766949   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:08.766949   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:08.766987   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:08.766987   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766987   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:08.767055   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:08.767861   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:08.767861   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:08.767970   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:08.767970   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:08.768032   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:08.768050   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:08.768050   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:08.768050   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:08.768105   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:08.768128   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:08.768170   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:08.768170   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:08.768195   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
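The apiserver log above is consistent with a clean restart on the new control-plane address: caches and the node_authorizer sync within a second of serving on :8443, quota evaluators re-register, and the endpoints for the "kubernetes" service are reset from [172.17.87.47 172.17.95.88] down to [172.17.95.88], as expected when the VM comes back with a different host IP. A hypothetical verification (assuming the kubectl context carries the profile name, as minikube sets by default) that the service endpoint now matches the advertised address:

    kubectl --context multinode-316400 get endpoints kubernetes -o wide
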
	I0603 05:47:08.776793   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:08.776793   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:08.805375   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:08.805729   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:08.805729   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:08.805832   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:08.805832   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:08.805894   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:08.805921   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
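The etcd log is likewise consistent with a healthy single-member restart: the existing data dir is reused, the WAL is replayed up to commit index 1995, the member wins a new election at term 3, and client traffic is served securely on 127.0.0.1:2379 and 172.17.95.88:2379. Note that the restored membership record still carries the old peer URL (https://172.17.87.47:2380) even though the server now listens on 172.17.95.88:2380, a leftover from the previous run's IP. An illustrative way to inspect the member list (assuming etcdctl is present in the etcd container, as in stock Kubernetes etcd images; not a harness command):

    out/minikube-windows-amd64.exe -p multinode-316400 ssh -- sudo docker exec ef3c01484867 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key member list
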
	I0603 05:47:08.820044   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:08.820044   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:08.859036   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:08.859365   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:08.859365   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:08.859389   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:08.859417   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:08.859417   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:08.859455   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:08.859494   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:08.859533   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:08.859533   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:08.859591   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:08.859591   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:08.859615   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:08.859724   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:08.859741   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:08.859741   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:08.859802   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:08.860746   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:08.860746   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:08.861152   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:08.861296   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:08.861296   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:08.861452   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:08.861506   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:08.861525   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:08.861525   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:08.861641   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:08.861961   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:08.861961   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:08.861961   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:08.862097   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:08.862157   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:08.862184   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:08.862184   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:08.862236   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:08.862377   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:08.862377   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862439   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862439   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862575   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:08.862597   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:08.862632   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:08.862632   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:08.862749   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:08.862749   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:08.862773   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:08.862801   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:08.862801   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:08.862837   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:08.862876   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:08.862876   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:08.862922   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:08.862922   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:08.862981   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:08.863005   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:08.863051   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:08.863078   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:08.863138   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:08.863138   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:08.863186   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:08.863186   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:08.863214   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:08.863214   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:08.863410   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:08.863438   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:08.863438   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:08.863552   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:08.863552   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:08.863613   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:08.863613   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:08.863643   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:08.864927   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:08.864927   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.890210   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:08.890210   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
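The kindnet excerpt above is its steady-state node-sync loop: roughly every ten seconds it walks the node list, handles the current node without touching routes, and keeps one host route per remote pod CIDR. The only state change in the whole run is at 12:41:29, when multinode-316400-m03 comes back with a new IP (172.17.87.60) and a new pod CIDR (10.244.3.0/24) and kindnet installs the replacement route (the routes.go:62 line). A minimal sketch of that route add, assuming the github.com/vishvananda/netlink package; the destination and gateway come from the log lines above, everything else is illustrative rather than kindnet's actual source:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Pod CIDR advertised for multinode-316400-m03 after its restart.
	_, dst, err := net.ParseCIDR("10.244.3.0/24")
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: ip route add 10.244.3.0/24 via 172.17.87.60
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("172.17.87.60"), // the node's new InternalIP
	}
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatal(err)
	}
}

Run as root on the node, this does the same thing as "ip route add 10.244.3.0/24 via 172.17.87.60".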
	I0603 05:47:08.956211   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:08.956211   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:08.982439   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:08.982604   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:08.982668   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
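For the dmesg pass, logs.go filters to warnings and worse (-P disables the pager, -H gives human-readable timestamps, -L=never strips color, --level selects severities) and keeps the last 400 lines; everything captured above is ordinary Buildroot guest boot noise (nomodeset, CPU-vulnerability notices, systemd-fstab-generator). A self-contained stand-in for that collection step, runnable locally instead of through ssh_runner (assumes bash, util-linux dmesg, and sudo rights on the target):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flag set as the ssh_runner command above.
	cmd := exec.Command("bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("dmesg collection failed:", err)
	}
	fmt.Print(string(out))
}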
	I0603 05:47:08.984668   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:08.984668   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:09.011241   10844 command_runner.go:130] > .:53
	I0603 05:47:09.011241   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:09.011241   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:09.011241   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:09.011241   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
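The lone HINFO line above is the random-name query CoreDNS's loop plugin sends to itself on startup; the NXDOMAIN answer is expected and means no forwarding loop was detected. To poke the same resolver by hand from Go, a small sketch using only the standard library; the 10.96.0.10:53 address is the conventional kube-dns ClusterIP and is an assumption here, not a value taken from this log:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Assumed kube-dns ClusterIP; substitute your cluster's value.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}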
	I0603 05:47:09.011241   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:09.011241   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:09.039970   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:09.040068   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:09.040068   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
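The kube-proxy startup above is the standard client-go informer handshake: each config controller logs "Waiting for caches to sync" (shared_informer.go:313) and, once the initial LIST from the apiserver has been delivered, "Caches are synced" (shared_informer.go:320), after which it begins writing iptables rules. A generic sketch of that pattern against client-go; the resource and resync values are illustrative, not kube-proxy's actual wiring:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	services := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// "Waiting for caches to sync": block until the initial LIST lands.
	if !cache.WaitForCacheSync(stop, services.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	// "Caches are synced": safe to start handling events.
	fmt.Println("caches are synced for service config")
}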
	I0603 05:47:09.042230   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:09.042230   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:09.252742   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:09.252742   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:09.252742   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.252742   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:09.252742   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:09.252742   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.253724   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:09.253724   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.253724   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:02 +0000
	I0603 05:47:09.253724   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:09.253724   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:09.253724   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:09.253724   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:09.253724   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:09.253724   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.253724   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.253724   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:09.253724   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.253724   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.253724   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:09.253724   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:09.253724   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.253724   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:09.253724   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:09.253724   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:09.254698   10844 command_runner.go:130] > Events:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:09.254698   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:09.254698   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.254698   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.254698   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:09.254698   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:09.254698   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.254698   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.254698   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:09.254698   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:09.254698   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.254698   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.254698   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.254698   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.254698   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:09.254698   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.254698   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.254698   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:09.254698   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:09.254698   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.254698   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.255689   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0603 05:47:09.255689   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.255689   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:09.255689   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:09.255689   10844 command_runner.go:130] > Events:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:09.255689   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:09.255689   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.255689   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.255689   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:09.255689   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:09.255689   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.255689   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.255689   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:09.255689   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.255689   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.255689   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:09.255689   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.255689   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.256698   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:09.256698   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:09.256698   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.256698   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:09.256698   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:09.256698   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.256698   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:09.256698   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:09.256698   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:09.256698   10844 command_runner.go:130] > Events:
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 5m38s                  kube-proxy       
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeReady                5m33s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeNotReady             3m56s                  node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
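	--- editor's sketch: the describe output above shows the control-plane node Ready while multinode-316400-m02 and multinode-316400-m03 carry node.kubernetes.io/unreachable taints with all conditions Unknown ("Kubelet stopped posting node status."). A minimal way to re-check that state against the same profile — assuming, as elsewhere in this report, that the kubeconfig context matches the profile name — would be:
	
	    # list node readiness across the multinode profile (context name assumed)
	    kubectl --context multinode-316400 get nodes -o wide
	    # inspect the taints and conditions on the second node
	    kubectl --context multinode-316400 describe node multinode-316400-m02
	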
	I0603 05:47:09.267716   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:09.267716   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:09.301978   10844 command_runner.go:130] > .:53
	I0603 05:47:09.301978   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:09.301978   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:09.302102   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:09.302262   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:09.302357   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:09.302408   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:09.302427   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:09.302427   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:09.302564   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:09.302564   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:09.302655   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:09.302689   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:09.302689   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:09.302744   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
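	--- editor's sketch: the CoreDNS log above ends with SIGTERM and a 5s lameduck shutdown, consistent with the pod being restarted while this test restarts the cluster. To pull the same logs live instead of via minikube's gatherer, something like the following should work (k8s-app=kube-dns is the standard CoreDNS selector in kubeadm-based clusters; the container ID 8280b3904678 above is specific to this run):
	
	    # tail CoreDNS logs by label rather than by container ID
	    kubectl --context multinode-316400 -n kube-system logs -l k8s-app=kube-dns --tail=400
	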
	I0603 05:47:09.306115   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:09.306115   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.378571   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:09.378571   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:09.443818   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:09.443818   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:09.443818   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:09.443818   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:09.443818   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:09.443818   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:09.443818   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:09.443818   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:09.443818   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:09.443818   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:09.443818   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:09.443818   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:09.444348   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
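The container-status step above uses a shell fallback: it runs crictl when it is on the PATH and otherwise falls back to a plain docker ps -a. A minimal standalone Go sketch of the same fallback follows; the two commands are taken verbatim from the log, and running them locally instead of through ssh_runner is the only liberty taken:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers `crictl ps -a` (all CRI containers, running or
    // exited) and falls back to `docker ps -a` when crictl is not installed,
    // matching the command the log gatherer runs on the node.
    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
            return string(out), err
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(out)
    }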
	I0603 05:47:09.446243   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:09.446774   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:09.477006   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:09.477006   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:09.478089   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:09.478242   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.478370   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:09.478456   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:09.478520   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:09.478520   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:09.478589   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
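Each per-component capture above and below is a plain `docker logs --tail 400 <container-id>` executed over ssh_runner. A standalone sketch using the kube-scheduler container ID from the listing (the ID is copied from the log; the SSH hop into the node is omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Tail the last 400 log lines of the kube-scheduler container,
        // as the gatherer does for each control-plane component in turn.
        out, err := exec.Command("docker", "logs", "--tail", "400", "334bb0174b55").CombinedOutput()
        if err != nil {
            fmt.Println("docker logs failed:", err)
        }
        fmt.Print(string(out))
    }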
	I0603 05:47:09.480228   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:09.480787   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:09.512305   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:09.512305   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
	I0603 05:47:09.515098   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:09.515098   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:09.545117   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:09.549116   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
	I0603 05:47:12.092494   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:47:12.122524   10844 command_runner.go:130] > 1862
	I0603 05:47:12.122524   10844 api_server.go:72] duration metric: took 1m6.8766895s to wait for apiserver process to appear ...
	I0603 05:47:12.122524   10844 api_server.go:88] waiting for apiserver healthz status ...
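At this point the gatherer has confirmed the apiserver process exists (the pgrep above returned PID 1862) and begins waiting for its healthz status. A hedged sketch of such a probe, assuming the node IP reported earlier by kube-proxy (172.17.87.47) and minikube's default apiserver port 8443; certificate verification is skipped only because this throwaway probe does not load the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification is for this illustration only; real
            // tooling should trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://172.17.87.47:8443/healthz") // host from log, port assumed
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }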
	I0603 05:47:12.132404   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:12.155289   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:12.155539   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:12.165042   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:12.187934   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:12.188577   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:12.198275   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:12.220955   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:12.220955   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:12.222717   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:12.231853   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:12.257031   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:12.257031   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:12.257724   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:12.267515   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:12.291045   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:12.291045   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:12.292036   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:12.301908   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:12.326589   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:12.326589   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:12.326589   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:12.336708   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:12.361999   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:12.361999   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:12.363050   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
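	(Aside: each docker ps invocation above uses the same pattern: kubelet names its containers k8s_<component>_<pod>_..., so filtering on the name prefix and formatting with {{.ID}} yields the container IDs for one control-plane component, which is how the log arrives at "1 containers"/"2 containers" per component. A small Go sketch that shells out with the identical flags; the component list is copied from the log and error handling is simplified.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs the same command seen in the log:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// and returns one container ID per output line.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		fmt.Println(c, ids, err)
	}
}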
	I0603 05:47:12.363087   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:12.363153   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.401848   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.401883   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.401883   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401971   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.402061   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.402166   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:12.402166   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.402234   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
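	(Aside: the three fatal cri-dockerd startups above all fail for the same reason: the service probes dockerd over /var/run/docker.sock before the daemon is up, and after the rapid exits systemd's start rate limiting kicks in, which is what "Start request repeated too quickly" records; once dockerd itself starts in the lines that follow, cri-dockerd comes up cleanly. A hedged Go sketch of such a readiness probe against Docker's /_ping endpoint; the socket path is the one from the log, the rest is illustrative rather than cri-dockerd's actual check.)

package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// pingDocker reports whether dockerd answers on its unix socket, the
// precondition whose absence produced the fatal errors above.
func pingDocker() error {
	client := &http.Client{
		Transport: &http.Transport{
			// Route all requests to the daemon's unix socket; the
			// "unix" host in the URL below is a placeholder.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
		Timeout: 2 * time.Second,
	}
	resp, err := client.Get("http://unix/_ping")
	if err != nil {
		return fmt.Errorf("docker daemon not reachable: %w", err)
	}
	defer resp.Body.Close()
	fmt.Println("docker responded:", resp.Status)
	return nil
}

func main() {
	if err := pingDocker(); err != nil {
		fmt.Println(err)
	}
}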
	I0603 05:47:12.402234   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:12.402936   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:12.402974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:12.402974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:12.403008   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.403008   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403320   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403570   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403591   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:12.403591   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403654   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403654   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:12.403716   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:12.403740   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:12.403740   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:12.403818   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:12.403818   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403872   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:12.404027   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:12.404198   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:12.404198   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:12.404879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:12.404879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:12.404910   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405368   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405368   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405395   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405395   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405583   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405608   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405608   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405660   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405685   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:12.405685   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405737   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405762   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:12.405762   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:12.405898   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405987   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:12.406044   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:12.406069   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406707   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406804   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.406845   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.406845   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406950   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406980   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407169   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407204   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407226   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407261   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407261   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407326   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407896   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407936   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.408547   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408547   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408598   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408598   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408658   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:12.408695   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:12.408717   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:12.408717   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:12.408776   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:12.408776   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408817   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408817   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409459   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409509   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409509   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.442628   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:12.442628   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:12.523948   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:12.523948   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:12.524640   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:12.524692   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:12.524692   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:12.524731   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:12.524793   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:12.524793   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:12.524793   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:12.524793   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:12.524793   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:12.524793   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
	I0603 05:47:12.527919   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:12.528034   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:12.557382   10844 command_runner.go:130] > .:53
	I0603 05:47:12.557382   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:12.557382   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:12.558387   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:12.558387   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
	I0603 05:47:12.559859   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:12.559940   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:12.594122   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:12.594553   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:12.594553   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:12.594599   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.594599   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
	I0603 05:47:12.597435   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:12.597541   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:12.629115   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.629335   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:12.629335   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.629384   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.629384   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.629418   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:12.629418   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:12.629447   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:12.629986   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:12.629986   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:12.630145   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:12.630845   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:12.630845   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:12.630972   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:12.630972   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:12.631019   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:12.631050   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:12.631692   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:12.631785   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:12.631785   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:12.631814   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.631851   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631851   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631890   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631991   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:12.632087   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:12.632087   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:12.632212   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:12.632296   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:12.632399   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:12.632399   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.632437   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:12.632437   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:12.632516   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:12.632594   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:12.632628   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.632665   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:12.632665   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:12.632737   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:12.632737   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:12.632785   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:12.632785   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:12.632907   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:12.632907   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:12.632947   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:12.632947   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:12.633004   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:12.633004   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:12.633039   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:12.633039   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:12.633629   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:12.633629   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:12.633746   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:12.633783   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:12.633820   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:12.633820   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.633860   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:12.633860   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:12.633897   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:12.633940   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:12.633940   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:12.633986   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:12.633986   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:12.634024   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:12.634060   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:12.634060   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634187   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.634267   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634405   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634405   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.653414   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:12.653414   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
	I0603 05:47:12.682333   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:12.683007   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683571   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:12.683664   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685668   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685793   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:12.685793   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685820   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685982   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685982   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686619   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:12.686619   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686958   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686991   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686991   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687659   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687659   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687697   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687739   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689540   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689576   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689812   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690440   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691709   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
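The kindnet block above is its steady-state reconciliation loop: roughly every 10 seconds it walks every node in the cluster, logs "handling current node" for the node it runs on, and confirms the pod CIDR reachable via each remote node's InternalIP. The single "Adding route" line (12:41:29) marks multinode-316400-m03 rejoining with a new IP (172.17.87.60) and a new pod CIDR (10.244.3.0/24), so a fresh route is programmed. A minimal Go sketch of that loop follows; the node struct and ensureRoute helper are hypothetical stand-ins for illustration, not kindnet's actual source (the real agent lists nodes from the Kubernetes API and programs routes via netlink).

    // Sketch of one ~10s reconciliation pass reflected in the kindnet log above.
    // The node type and ensureRoute helper are hypothetical.
    package main

    import "fmt"

    type node struct {
        name    string
        ip      string // node InternalIP, e.g. "172.17.94.201"
        podCIDR string // pod CIDR assigned to the node, e.g. "10.244.1.0/24"
        local   bool   // true for the node this agent runs on
    }

    // reconcile handles each node, skips the local one, and ensures a route
    // to every remote node's pod CIDR via that node's IP.
    func reconcile(nodes []node) {
        for _, n := range nodes {
            fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
            if n.local {
                fmt.Println("handling current node") // no route needed to self
                continue
            }
            fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
            ensureRoute(n.podCIDR, n.ip)
        }
    }

    // ensureRoute only logs here; the real agent would add the route via
    // netlink when it is missing, as in the one "Adding route" line above.
    func ensureRoute(dst, gw string) {
        fmt.Printf("ensuring route {Dst: %s Gw: %s}\n", dst, gw)
    }

    func main() {
        reconcile([]node{
            {"multinode-316400", "172.17.87.47", "10.244.0.0/24", true},
            {"multinode-316400-m02", "172.17.94.201", "10.244.1.0/24", false},
            {"multinode-316400-m03", "172.17.87.60", "10.244.3.0/24", false},
        })
    }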
	I0603 05:47:12.711493   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:12.712455   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:12.745398   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
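The kube-proxy log above shows an uneventful startup: it probes iptables support, retrieves the node IP (172.17.87.47), settles on single-stack IPv4, starts its service, endpoint-slice, and node config controllers, and only reports ready once each informer cache has synced. The "Waiting for caches to sync" / "Caches are synced" pairing is client-go's standard shared-informer idiom; a minimal sketch of that idiom (assuming an in-cluster config, and not kube-proxy's actual wiring) is:

    // Minimal sketch of the shared-informer sync pattern visible in the
    // kube-proxy log above. Assumes the program runs inside a cluster.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        svc := factory.Core().V1().Services().Informer()
        eps := factory.Discovery().V1().EndpointSlices().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // starts the list/watch loops

        fmt.Println("Waiting for caches to sync for service and endpoint slice config")
        if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
            panic("caches failed to sync")
        }
        // Only after this point would a proxy start programming rules.
        fmt.Println("Caches are synced")
    }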
	I0603 05:47:12.747653   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:12.747695   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:12.785851   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:12.787154   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:12.789296   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:12.789536   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:12.791813   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792050   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:12.792050   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:12.793168   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:12.793202   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:12.793202   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:12.793243   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:12.793243   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:12.793933   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
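The controller-manager entries above all follow the same shared-informer lifecycle: each controller logs "Waiting for caches to sync" when it starts and "Caches are synced" once its informer has a complete view, after which work items (the "Finished syncing" ReplicaSet lines) begin to flow. A minimal sketch for pulling just those lifecycle lines out of the same container, using the same `docker logs --tail 400` gather the harness runs below; the container ID here is a placeholder, not a value from this run:

    # Placeholder container ID; substitute the kube-controller-manager
    # container found via `docker ps` on the control-plane node.
    docker logs --tail 400 <kube-controller-manager-container> 2>&1 \
      | grep -E 'Waiting for caches to sync|Caches are synced'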
	I0603 05:47:12.814157   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:12.814157   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:12.843221   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.843955   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.843955   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844150   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844221   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844302   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.845012   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845080   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845131   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845241   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.845394   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.845469   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.845469   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845547   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845587   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.845602   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.845602   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:12.845766   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
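The repeated "forbidden" warnings in the scheduler log are all the same pattern, typically a startup race: the scheduler comes up before RBAC for system:kube-scheduler has been reconciled, then recovers on its own once its caches sync at 12:23:04. The later "finished without leader elect" error at 12:43:27 is a separate event from the node restart. The remedy the log itself suggests for the configmap lookup warning, filled in with hypothetical names (the rolebinding name and service account below are placeholders, not values from this run):

    # Hypothetical names; adjust the service account to whichever
    # component needs the extension-apiserver-authentication-reader role.
    kubectl create rolebinding -n kube-system auth-reader-binding \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:kube-scheduler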
	I0603 05:47:12.859170   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:12.859170   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:12.886501   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:12.886501   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:12.886501   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:12.887520   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:12.887520   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:12.887557   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:12.887583   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:12.887583   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
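Per the kindnet entries, the daemon polls the node list every ten seconds, skips the node it is running on ("handling current node"), and installs one route per remote node mapping that node's pod CIDR to its node IP. The two "Adding route" entries above are equivalent to the following manual commands (values taken directly from this log; shown only as a sketch of what kindnet programs, not something to run by hand on a live node):

    # Equivalent of the two "Adding route" entries above.
    sudo ip route add 10.244.1.0/24 via 172.17.94.201   # multinode-316400-m02
    sudo ip route add 10.244.3.0/24 via 172.17.87.60    # multinode-316400-m03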
	I0603 05:47:12.890880   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:12.891412   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:12.916815   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:12.916944   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:12.916944   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
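The kernel messages above were collected with the exact dmesg invocation shown at the top of this block; to reproduce the same filtered view on the node:

    # Same flags as the gather step above: no pager (-P), human-readable
    # timestamps (-H), no color, warn level and above, last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400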
	I0603 05:47:12.919126   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:12.919126   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958304   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958304   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:12.958304   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:12.958356   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:12.958356   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958356   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:12.958493   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:12.958568   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:12.958642   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:12.958715   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.958796   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.958796   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:12.958842   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:12.958842   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:12.959137   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.959137   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:12.959165   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:12.959193   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
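The kube-apiserver entries above were pulled with "docker logs" inside the guest. As a sketch of how to reproduce that capture by hand (the name filter and the placeholder container ID are illustrative, not taken from this run):

	# locate the kube-apiserver container inside the minikube guest, then tail its last 400 lines
	minikube -p multinode-316400 ssh -- "docker ps --filter name=kube-apiserver --format '{{.ID}}'"
	minikube -p multinode-316400 ssh -- "docker logs --tail 400 <container-id>"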
	I0603 05:47:12.968828   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:12.968828   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:12.998550   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:12.999173   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:12.999350   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:12.999350   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:12.999389   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:12.999477   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:12.999527   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:12.999527   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:12.999564   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:12.999564   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:12.999644   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:12.999644   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:12.999688   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:12.999688   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
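The etcd member ef3c01484867 restarts, replays its WAL, and elects itself leader at term 3, so the data plane comes back without a snapshot restore. A quick health probe against that member, as a sketch (the certificate paths are the ones shown in the startup flags above; etcdctl being present in the container is an assumption):

	# check the restarted member from inside the guest
	minikube -p multinode-316400 ssh -- "docker exec ef3c01484867 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health"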
	I0603 05:47:13.007028   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:13.007176   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:13.040482   10844 command_runner.go:130] > .:53
	I0603 05:47:13.040561   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:13.040612   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:13.040612   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:13.040612   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:13.040663   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:13.040663   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:13.040696   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:13.040696   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:13.040740   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:13.040740   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:13.040773   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
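Every lookup above answers NOERROR, or an expected NXDOMAIN from search-path expansion, in well under a second; the closing SIGTERM plus 5s lameduck window is CoreDNS shutting down cleanly for the node restart rather than crashing. To replay one of these queries from inside the cluster, a sketch (the pod name and image choice are illustrative):

	# throwaway pod that issues the same in-cluster lookup CoreDNS served above
	kubectl --context multinode-316400 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  nslookup kubernetes.default.svc.cluster.local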
	I0603 05:47:13.044558   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:13.044589   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:13.077930   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:13.078742   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078813   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078813   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:13.078882   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:13.078909   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:13.078937   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:13.078969   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078969   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079021   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:13.079071   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:13.079071   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:13.079140   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:13.079217   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:13.079297   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:13.079328   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:13.079328   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:13.079364   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:13.079408   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:13.079430   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079474   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:13.080077   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.080077   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.080162   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.080162   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.080213   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:13.080213   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:13.080320   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:13.080371   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:13.080423   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:13.080445   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:13.080445   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:13.080492   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:13.080530   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:13.080530   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:13.080572   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.081128   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.081174   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:13.081230   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081270   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081304   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:13.081304   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:13.081986   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:13.082104   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:13.082166   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:13.082210   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:13.082250   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:13.082285   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.082285   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083014   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083124   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:13.083124   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083215   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083254   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083254   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083926   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083972   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.083972   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084093   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084133   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084185   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084225   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.084262   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.084301   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084335   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084408   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084446   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084480   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084737   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084840   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084840   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.084892   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084968   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085601   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085682   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085682   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085784   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085824   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085883   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085883   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:13.086579   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:13.086579   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
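Two failure patterns recur in the kubelet log above: volume mounts fail with `object "kube-system"/"coredns" not registered` while the restarted kubelet rebuilds its object cache (each retry backs off, here to 32s), and pod sync is skipped until kindnet rewrites the CNI config ("cni config uninitialized"). The iptables canary error reflects the missing `nat` table in the guest's ip6tables-legacy setup. A minimal sketch for inspecting all three by hand, assuming `minikube ssh` access to the profile (these commands are not part of the logged run):

    minikube ssh -p multinode-316400 -- ls /etc/cni/net.d
    minikube ssh -p multinode-316400 -- sudo ip6tables -t nat -L -n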
	I0603 05:47:13.140022   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:13.140022   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:13.374314   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:13.374314   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:13.374314   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.374314   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:13.374314   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:13.374314   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.374314   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.374314   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:12 +0000
	I0603 05:47:13.374314   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:13.374314   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:13.374314   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:13.374314   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:13.374314   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.374314   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.374314   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:13.374314   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.374314   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.374314   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:13.374314   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:13.374314   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.374314   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:13.375275   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.375275   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:13.375275   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:13.375275   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:13.375275   10844 command_runner.go:130] > Events:
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:13.375412   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:13.375567   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:13.375593   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:13.375682   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:13.375682   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.375682   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.375682   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:13.375682   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:13.375682   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.375682   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.375682   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:13.375682   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:13.375682   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.375682   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.375682   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.375682   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.375682   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:13.375682   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.375682   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.376257   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:13.376257   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:13.376257   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.376257   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.376447   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.376447   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0603 05:47:13.376447   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.376447   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.376447   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.376447   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:13.376447   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:13.376447   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:13.376545   10844 command_runner.go:130] > Events:
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:13.376545   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:13.376743   10844 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:13.376743   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:13.376743   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:13.376774   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.376928   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:13.376928   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:13.376971   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.376971   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.376971   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.377012   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.377012   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:13.377012   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:13.377050   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:13.377050   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.377050   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.377050   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:13.377050   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.377116   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:13.377116   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:13.377116   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:13.377116   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.377116   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.377116   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.377116   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.377116   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:13.377116   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.377116   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.377716   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:13.377716   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:13.377716   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:13.377716   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.377716   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.377716   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:13.377716   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:13.377716   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.377904   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:13.377904   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:13.377904   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:13.377987   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:13.377987   10844 command_runner.go:130] > Events:
	I0603 05:47:13.377987   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:13.378062   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:13.378062   10844 command_runner.go:130] >   Normal  Starting                 5m42s                  kube-proxy       
	I0603 05:47:13.378062   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:13.378169   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.378239   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:13.378239   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	I0603 05:47:13.378421   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:13.378421   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:13.378570   10844 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:13.378570   10844 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:13.378642   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
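In the `describe nodes` output above, the control-plane node is Ready while multinode-316400-m02 and multinode-316400-m03 carry `node.kubernetes.io/unreachable` taints with all conditions Unknown: their kubelets stopped posting status after the restart, so the node controller marked them NotReady. A sketch for re-running the same check by hand (assumes the profile's kubeconfig context name; not part of the logged run):

    kubectl --context multinode-316400 get nodes -o wide
    kubectl --context multinode-316400 describe node multinode-316400-m02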
	I0603 05:47:13.390046   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:13.390046   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
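The requestheader_controller warning above is emitted while the restarted scheduler cannot yet read `extension-apiserver-authentication` from kube-system; the final line shows the informer caches syncing once the apiserver is reachable, so it resolves on its own here. For the case where the RBAC binding is genuinely missing, the log's own suggested remediation looks like the following, with illustrative placeholder values filled in (the binding name and service account are hypothetical):

    kubectl create rolebinding -n kube-system extension-apiserver-authn-reader \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:kube-scheduler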
	I0603 05:47:15.927611   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:47:15.934787   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
	I0603 05:47:15.935469   10844 round_trippers.go:463] GET https://172.17.95.88:8443/version
	I0603 05:47:15.935572   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:15.935572   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:15.935643   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:15.937252   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:47:15.937252   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:15.937252   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Content-Length: 263
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:15 GMT
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Audit-Id: 13a9976f-4eba-4aa5-b8ce-cd9a75caa81d
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:15.937252   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:15.937252   10844 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 05:47:15.937252   10844 api_server.go:141] control plane version: v1.30.1
	I0603 05:47:15.937252   10844 api_server.go:131] duration metric: took 3.814714s to wait for apiserver health ...
	I0603 05:47:15.937252   10844 system_pods.go:43] waiting for kube-system pods to appear ...
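The health check above is a plain HTTPS GET against the apiserver's /healthz and /version endpoints. A hand-run equivalent (a sketch only; `-k` skips certificate verification, which the test client instead handles with the cluster CA):

    curl -k https://172.17.95.88:8443/healthz
    curl -k https://172.17.95.88:8443/version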
	I0603 05:47:15.946206   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:15.986773   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:15.987010   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:15.997273   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:16.022845   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:16.022845   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:16.031673   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:16.057686   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:16.057716   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:16.057716   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:16.066444   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:16.088421   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:16.088421   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:16.089495   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:16.098269   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:16.120854   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:16.120854   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:16.120962   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:16.131692   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:16.155491   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:16.155491   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:16.155491   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:16.166122   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:16.186307   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:16.186307   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:16.186307   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
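Each lookup above uses the same pattern: `docker ps -a` filtered on the `k8s_<component>` name prefix that cri-dockerd assigns to Kubernetes-managed containers, with the output reduced to the container ID. The same command is runnable by hand inside the node, e.g.:

    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}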
	I0603 05:47:16.186844   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:16.186844   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:16.219860   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.220929   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.220974   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.220974   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
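The block above shows cri-docker.service crash-looping at boot because dockerd is not yet up; after three failures systemd's start-rate limit kicks in ("Start request repeated too quickly") and the unit stays failed until something restarts it. The lines that follow show dockerd itself coming up at 12:45:17. A sketch for inspecting the unit ordering by hand (assumes `minikube ssh` access; not part of the logged run):

    minikube ssh -p multinode-316400 -- sudo systemctl status cri-docker.service docker.service
    minikube ssh -p multinode-316400 -- sudo journalctl -u cri-docker -u docker -n 100 --no-pager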
	I0603 05:47:16.221405   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:16.221405   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:16.221467   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:16.221467   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221610   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221610   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221692   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221692   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221823   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221823   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221900   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:16.222136   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222472   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222472   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:16.222833   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:16.222833   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224272   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:16.224879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:16.224879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225636   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225993   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226418   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226418   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226774   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226774   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226826   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226826   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226967   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226967   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227101   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227246   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228116   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228183   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228183   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.261521   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:16.261521   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:16.285629   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:16.286637   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:16.286698   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
	I0603 05:47:16.288838   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:16.288838   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:16.321653   10844 command_runner.go:130] > .:53
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:16.321734   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:16.321734   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:16.321815   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:16.321815   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:16.322107   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:16.322107   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 05:47:16.325696   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:16.325696   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:16.351742   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:16.351742   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:16.352158   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.352287   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:16.352413   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:16.352413   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:16.352498   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:16.353068   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:16.353068   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:16.353118   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:16.356807   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:16.357147   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:16.357258   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:16.357469   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:16.358579   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:16.358742   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:16.358999   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:16.358999   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:16.359199   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:16.359717   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:16.359779   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:16.359846   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:16.359909   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:16.359909   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:16.359966   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:16.359966   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:16.360044   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:16.360044   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:16.360244   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:16.360244   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:16.360330   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:16.360369   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360623   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360668   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:16.360710   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:16.360710   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:16.360764   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:16.360764   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:16.361051   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:16.361051   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:16.362504   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:16.362504   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:16.363084   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363084   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:16.363157   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:16.363225   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:16.363225   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:16.363470   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363470   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
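The kube-controller-manager excerpt above shows the node-ipam-controller handing out a distinct PodCIDR as each node registers (10.244.0.0/24 for multinode-316400, 10.244.1.0/24 for -m02, then 10.244.2.0/24 and later 10.244.3.0/24 as -m03 re-registered). A quick way to cross-check the live assignments against these log lines, sketched here assuming the multinode-316400 kubeconfig context that minikube creates for this profile:

	kubectl --context multinode-316400 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

Each node should report exactly the CIDR the range_allocator logged for it.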
	I0603 05:47:16.382240   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:16.382240   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:16.409867   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.562103       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.563112       1 main.go:227] handling current node
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.563361       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.563375       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.563657       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.564016       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
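The kindnet excerpt shows the routing pattern for this cluster: on a roughly 10-second cycle (12:46:33, 12:46:43, ... 12:47:13 above) the daemon lists all nodes and, for each remote node, installs a route to that node's PodCIDR via the node's InternalIP (e.g. 10.244.1.0/24 via 172.17.94.201). Ifindex: 0 in the Adding route lines means no explicit output interface is set; the kernel resolves it from the gateway address. The routes kindnet programs can be inspected from the host (a sketch, using the profile name from this run; kindnet manages these routes itself):

	minikube -p multinode-316400 ssh -- ip route

which should list one entry per remote PodCIDR, matching the Adding route lines above.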
	I0603 05:47:16.415628   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:16.415658   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:16.623778   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:16.624531   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:16.624531   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.624531   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.624531   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:16.624763   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.624763   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.624763   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.624763   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:16.624826   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:16.624826   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.624826   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.624826   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:16.624826   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.624870   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:12 +0000
	I0603 05:47:16.624870   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.624870   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:16.624870   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:16.624931   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:16.624931   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:16.624991   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:16.624991   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:16.624991   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.624991   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:16.624991   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:16.624991   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.624991   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.625075   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.625075   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.625075   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.625075   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.625075   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.625075   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.625075   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.625136   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.625136   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.625136   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.625136   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.625136   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:16.625136   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.625216   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.625523   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.625564   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.625564   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.625564   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:16.625564   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:16.625625   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:16.625625   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.625625   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.625625   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:16.625625   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:16.625734   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0603 05:47:16.625734   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:16.625763   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:16.625882   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:16.625882   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.625882   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.625882   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:16.625882   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:16.625882   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:16.625882   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:16.625970   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:16.625970   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:16.625970   10844 command_runner.go:130] > Events:
	I0603 05:47:16.625970   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:16.625970   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.626109   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:16.626109   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:16.626226   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:16.626248   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:16.626248   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:16.626248   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.626248   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.626377   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.626430   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.626430   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:16.626430   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:16.626464   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:16.626464   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.626464   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.626464   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:16.626524   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.626524   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:16.626524   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.626605   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:16.626605   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:16.626605   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.626667   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:16.626728   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:16.626728   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.626728   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.626728   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.626728   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.626728   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.626728   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.626728   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.626801   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.626801   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.626801   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.626801   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.626801   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.626801   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:16.626861   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.626861   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.626927   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:16.626927   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:16.626988   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:16.626988   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.626988   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.626988   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:16.627053   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0603 05:47:16.627053   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:16.627053   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.627053   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.627053   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:16.627114   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:16.627114   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:16.627114   10844 command_runner.go:130] > Events:
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:16.627201   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:16.627322   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:16.627376   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:16.627376   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.627490   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.627490   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.627490   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.627490   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:16.627490   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:16.627655   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:16.627655   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.627655   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.627655   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:16.627655   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.627655   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:16.627716   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.627716   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:16.627716   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:16.627716   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627839   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.627839   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:16.627839   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:16.627839   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.627839   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.627839   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.627839   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.627904   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.627904   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.627904   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.627904   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.627972   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.628091   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.628091   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.628091   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.628091   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.628091   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:16.628170   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.628170   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.628238   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:16.628299   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:16.628299   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:16.628299   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.628299   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.628299   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:16.628367   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:16.628367   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.628367   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.628367   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:16.628367   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:16.628444   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:16.628444   10844 command_runner.go:130] > Events:
	I0603 05:47:16.628444   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:16.628505   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:16.628505   10844 command_runner.go:130] >   Normal  Starting                 5m45s                  kube-proxy       
	I0603 05:47:16.628505   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:16.628587   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.628587   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  NodeReady                5m40s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:16.628824   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
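In the describe output above, both worker nodes carry node.kubernetes.io/unreachable taints with NoSchedule and NoExecute effects, consistent with their Ready=Unknown conditions and the NodeNotReady events; the NoExecute taint is what lets the taint-eviction-controller evict pods that lack a matching toleration once their toleration window expires. A compact way to list taints across the cluster, again assuming the same context (a sketch, not part of this run):

	kubectl --context multinode-316400 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints[*].key}{"\n"}{end}'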
	I0603 05:47:16.639563   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:16.639563   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:16.670721   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:16.670721   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:16.671154   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.671208   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:16.671452   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:16.671524   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:16.671524   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:16.671567   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:16.671590   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:16.671590   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:16.671707   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671771   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671972   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671972   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:16.671972   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672086   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:16.672086   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:16.672086   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:16.672134   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672134   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672291   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672353   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672411   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672411   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:16.672411   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:16.672411   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:16.672484   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672484   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:16.672484   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:16.672643   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:16.672643   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:16.673051   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:16.673733   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:16.673870   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:16.673870   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:16.673870   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
	I0603 05:47:16.683351   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:16.683351   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:16.717960   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:16.717960   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.718960   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719009   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719686   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719686   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.719763   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.719763   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719842   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719842   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.720057   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.720180   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.720180   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.720244   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.720244   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.720315   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.720315   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:16.720315   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
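
	The wall of "forbidden" reflector errors is the scheduler racing RBAC bootstrap: system:kube-scheduler cannot list anything until the apiserver finishes installing the default roles, and the errors stop once caches sync at 12:23:04. The requestheader_controller warning at the top already names the usual remedy for the related extension-apiserver-authentication lookup; spelled out with the log's own placeholder names (ROLEBINDING_NAME, YOUR_NS, YOUR_SA are hypothetical and must be substituted), it would be:

	    # Placeholders come straight from the scheduler's own hint; substitute real names
	    kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=YOUR_NS:YOUR_SA

	The closing "finished without leader elect" at 12:43:27 is the pre-restart scheduler container (f39be6db7a1f) exiting, consistent with the node being stopped for the restart rather than a scheduling failure in the current run.
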
	I0603 05:47:16.731704   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:16.731704   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:16.773806   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:16.774624   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:16.774624   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:16.774763   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:16.774763   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.774825   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:16.774910   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
	I0603 05:47:16.778045   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:16.778045   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:16.816511   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:16.816579   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
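
	Both kube-proxy captures show the same healthy startup sequence, once for the original boot (12:23, node IP 172.17.87.47) and once after the restart (12:46, node IP 172.17.95.88): iptables mode selected, IPv6 skipped for lack of kernel support, route_localnet=1 set, and the service/endpoint-slice/node config caches synced. Two illustrative spot-checks, assuming the profile name above and a reachable node (the sysctl key is the one the proxier says it sets):

	    # Inspect the generated kube-proxy configuration
	    kubectl --context multinode-316400 -n kube-system get configmap kube-proxy -o yaml
	    # Confirm route_localnet inside the node
	    out/minikube-windows-amd64.exe -p multinode-316400 ssh -- sysctl net.ipv4.conf.all.route_localnet
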
	I0603 05:47:16.822584   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:16.822638   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:16.855591   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
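
The kubelet journal excerpt above is dominated by two transient restart-time conditions: every pod sync fails with "cni config uninitialized" until the CNI plugin (kindnet in this run) rewrites its config, and the coredns/busybox volume mounts back off with 'object "kube-system"/"coredns" not registered' until the restarted kubelet re-syncs those API objects. A minimal sketch for checking this on the node, assuming the conventional CNI config directory /etc/cni/net.d and the kindnet app label (both assumptions, not taken from this log):

	# an empty CNI config dir would explain the "cni config uninitialized" errors
	ls -l /etc/cni/net.d/
	# kindnet is the daemonset expected to write that config (label is an assumption)
	kubectl -n kube-system get pods -l app=kindnet -o wide
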
	I0603 05:47:16.909090   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:16.909090   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:16.947821   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:16.950126   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:16.950126   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
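
The etcd log above shows a clean single-member restart: the member recovers its WAL ("No snapshot found. Recovering WAL from scratch!"), rejoins as a follower at term 2, pre-votes, elects itself leader at term 3, and only then serves client traffic on 2379. A quick health probe against that endpoint, sketched with etcdctl and reusing the certificate paths from the startup args above (assumes etcdctl is available inside the control-plane node, which this log does not confirm):

	# hypothetical health check mirroring the TLS flags etcd was started with
	ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health
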
	I0603 05:47:16.956883   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:16.956883   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:16.988242   10844 command_runner.go:130] > .:53
	I0603 05:47:16.989206   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:16.989252   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:16.989252   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:16.989252   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
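
In the coredns log above, the random-name HINFO query answered NXDOMAIN is CoreDNS's loop-detection self-probe at startup, not a resolution failure, and the plugin/reload line confirms the Corefile was loaded. A hedged end-to-end check from inside the cluster, reusing the busybox image this run already pulls (the pod name dns-probe is hypothetical):

	# resolve the API service through CoreDNS from a throwaway pod
	kubectl run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default
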
	I0603 05:47:16.989538   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:16.989647   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:17.056163   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:17.056213   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:17.056278   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:17.056278   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         33 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:17.056325   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:17.056325   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:17.056325   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:17.056391   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:17.056431   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:17.056475   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:17.056517   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:17.056588   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:17.056588   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:17.056627   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:17.056627   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:17.056627   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:17.056627   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
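
Read this table against the restart: the ATTEMPT 1 entries (busybox, coredns, kube-proxy, kindnet) are the post-restart incarnations of the 20-plus-minute-old Exited rows beneath them, storage-provisioner is on attempt 2 after the CrashLoopBackOff seen earlier, and kube-apiserver/etcd show attempt 0 with no Exited counterpart, suggesting they were recreated as new pods rather than restarted in place. To narrow the same listing to the containers that died, a sketch using crictl's state filter on the node (flag availability varies by crictl version, an assumption here):

	# list only exited containers to pair each one with its replacement
	sudo crictl ps -a --state exited
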
	I0603 05:47:17.058787   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:17.059377   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:17.088120   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:17.088578   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:17.088620   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:17.088666   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:17.088731   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
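
The scheduler's only notable entries are the startup warnings about the extension-apiserver-authentication configmap: before RBAC caches sync it cannot read the configmap, notes that requests may be treated as anonymous, and proceeds normally once caches sync at 12:46:00.568947. The log message itself names the remedy if the condition persisted; rendered here with the log's own placeholders left as placeholders, not real values:

	# only needed if the warning persists; ROLEBINDING_NAME and
	# YOUR_NS:YOUR_SA are the log's placeholders, substitute accordingly
	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA
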
	I0603 05:47:17.091051   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:17.091128   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:17.123816   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:17.124610   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:17.124610   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:17.124739   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:17.124879   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:17.125087   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:17.125702   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:17.126041   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:17.126114   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:17.126215   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:17.126282   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:17.126537   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:17.126615   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:17.126828   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:17.126942   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:17.126942   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:17.126997   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:17.127173   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:17.127212   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:17.127447   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:17.128958   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:17.129003   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:17.129845   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:17.129845   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:17.129888   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:17.129888   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:17.129926   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:17.129945   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:17.129945   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:17.130051   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:17.130119   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:17.130119   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:17.130196   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:17.130239   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:17.130449   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:17.130500   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:17.131143   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:17.131448   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:17.131448   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:17.131930   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:17.132870   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:17.133365   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:17.133424   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:17.133765   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:17.133765   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:17.133824   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:17.134824   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:17.134943   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:17.134943   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:17.135005   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:17.135005   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:17.135066   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:17.135066   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:17.135108   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:17.135166   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:17.135166   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.135617   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:17.136197   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.136197   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:17.136738   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:17.136876   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:17.136953   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.136953   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:17.138539   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:17.139484   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:17.139484   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:17.139744   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:17.139817   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:17.139939   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:17.139973   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:17.140012   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:17.140012   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:17.140041   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:17.140079   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
	I0603 05:47:17.158490   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:17.158490   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
	I0603 05:47:17.191117   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192482   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192482   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:17.192513   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192534   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192534   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193293   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:17.193293   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193437   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193437   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193485   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193485   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195736   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196715   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:17.196715   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196759   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196759   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197795   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197795   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197983   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198012   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198012   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
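
Note: the kindnet entries above are its reconcile loop. Every ~10s it lists the cluster's nodes, logs "handling current node" for the node it runs on, and ensures each remote node's pod CIDR is routed via that node's IP; the routes.go:62 line at 12:41:29 shows it reacting when multinode-316400-m03 came back as 172.17.87.60 with the new CIDR 10.244.3.0/24. A minimal sketch of that reconcile step, assuming a helper that shells out to iproute2 ("ip route replace" is idempotent); the real kindnet manipulates routes via netlink instead:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // node is a pared-down view of what kindnet reads from the API
    // server: a reachable node IP and the pod CIDR assigned to it.
    type node struct {
    	name    string
    	ip      string // e.g. 172.17.87.60
    	podCIDR string // e.g. 10.244.3.0/24
    }

    // syncRoutes mirrors the loop in the log: skip the node we run on,
    // then route every remote pod CIDR via that node's IP. Because
    // "replace" is idempotent, rerunning each cycle is safe, and an IP
    // change (as with m03 above) simply rewrites the gateway.
    func syncRoutes(current string, nodes []node) error {
    	for _, n := range nodes {
    		if n.name == current {
    			continue // "handling current node": nothing to route
    		}
    		out, err := exec.Command("ip", "route", "replace",
    			n.podCIDR, "via", n.ip).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("route %s via %s: %v: %s", n.podCIDR, n.ip, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	// Node IPs and remote CIDRs from the log above; the current
    	// node's own CIDR (illustrative here) is skipped anyway.
    	nodes := []node{
    		{"multinode-316400", "172.17.87.47", "10.244.0.0/24"},
    		{"multinode-316400-m02", "172.17.94.201", "10.244.1.0/24"},
    		{"multinode-316400-m03", "172.17.87.60", "10.244.3.0/24"},
    	}
    	if err := syncRoutes("multinode-316400", nodes); err != nil {
    		fmt.Println(err)
    	}
    }
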
	I0603 05:47:19.717891   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:47:19.717891   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.717891   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.717891   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.725244   10844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:47:19.725244   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Audit-Id: 590bb11a-8aa1-4a7d-a20e-40318993805e
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.725244   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.725244   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.726297   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0603 05:47:19.730172   10844 system_pods.go:59] 12 kube-system pods found
	I0603 05:47:19.730172   10844 system_pods.go:61] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:47:19.730172   10844 system_pods.go:74] duration metric: took 3.7929055s to wait for pod list to return data ...
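
Note: system_pods.go is polling the kube-system pod list and requiring every pod to report Running before the wait completes. A hedged client-go equivalent of that check (kubeconfig loading shown with clientcmd; the function name is illustrative, not minikube's actual helper):

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // allSystemPodsRunning lists kube-system pods and requires
    // phase == Running for each, matching the log output above.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return false, err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		if p.Status.Phase != v1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := allSystemPodsRunning(context.Background(), cs)
    	fmt.Println("all running:", ok, "err:", err)
    }
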
	I0603 05:47:19.730172   10844 default_sa.go:34] waiting for default service account to be created ...
	I0603 05:47:19.731186   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/default/serviceaccounts
	I0603 05:47:19.731186   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.731186   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.731186   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.734358   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:19.734358   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Audit-Id: 16f2fd83-5fb5-428e-9796-be58f6e6c124
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.734358   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.734358   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Content-Length: 262
	I0603 05:47:19.734358   10844 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"995f775d-e30c-4872-957a-b91ade4bf666","resourceVersion":"318","creationTimestamp":"2024-06-03T12:23:18Z"}}]}
	I0603 05:47:19.734358   10844 default_sa.go:45] found service account: "default"
	I0603 05:47:19.734358   10844 default_sa.go:55] duration metric: took 4.1865ms for default service account to be created ...
	I0603 05:47:19.734358   10844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 05:47:19.734358   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:47:19.734358   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.734358   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.734358   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.740471   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:19.740660   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Audit-Id: ef05b071-8ada-4ff8-8a77-1135879cf8cc
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.740660   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.740660   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.741997   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0603 05:47:19.745713   10844 system_pods.go:86] 12 kube-system pods found
	I0603 05:47:19.745713   10844 system_pods.go:89] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:47:19.745713   10844 system_pods.go:126] duration metric: took 11.3549ms to wait for k8s-apps to be running ...
	I0603 05:47:19.745713   10844 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:47:19.756674   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:47:19.782100   10844 system_svc.go:56] duration metric: took 36.3864ms WaitForService to wait for kubelet
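
Note: the kubelet probe above leans entirely on the exit status: "systemctl is-active --quiet <unit>" prints nothing and exits 0 only when the unit is active, so the ssh_runner only needs the command's error value. Run locally on a systemd host, the same check looks like this sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive reports whether the kubelet unit is active; with
    // --quiet the exit code alone carries the answer (0 == active).
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }
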
	I0603 05:47:19.782100   10844 kubeadm.go:576] duration metric: took 1m14.5362368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:47:19.782100   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:47:19.782303   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:47:19.782303   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.782303   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.782303   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.786813   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:19.786850   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Audit-Id: 6a9eee4d-325a-45a9-be62-e3006bdc5c5d
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.786850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.786850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.786850   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16255 chars]
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:105] duration metric: took 5.6975ms to run NodePressure ...
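
Note: node_conditions.go reads each node's capacity out of the NodeList response above: 17734596Ki of ephemeral storage and 2 CPUs, reported once per node for all three nodes. A short sketch of reading those fields with client-go (clientset construction as in the pod-list sketch earlier; the function name is illustrative):

    package inspect

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists nodes and reports the two capacity
    // fields the test logs: ephemeral storage and CPU count.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name,
    			n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
    	}
    	return nil
    }
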
	I0603 05:47:19.787943   10844 start.go:240] waiting for startup goroutines ...
	I0603 05:47:19.787943   10844 start.go:245] waiting for cluster config update ...
	I0603 05:47:19.787943   10844 start.go:254] writing updated cluster config ...
	I0603 05:47:19.792158   10844 out.go:177] 
	I0603 05:47:19.794354   10844 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:47:19.805336   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:47:19.805336   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:19.811364   10844 out.go:177] * Starting "multinode-316400-m02" worker node in "multinode-316400" cluster
	I0603 05:47:19.815354   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:47:19.815354   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:47:19.816352   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:47:19.816352   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:47:19.816352   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:19.818350   10844 start.go:360] acquireMachinesLock for multinode-316400-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:47:19.818350   10844 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400-m02"
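
Note: acquireMachinesLock serializes machine operations across minikube processes, retrying every 500ms with a 13m budget (here it won the lock immediately). A rough stand-in for those Delay/Timeout semantics using an O_EXCL lock file; minikube's real implementation uses a dedicated mutex package, so this is only a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock retries creating an exclusive lock file until it wins
    // or the timeout elapses, mirroring the Delay:500ms Timeout:13m0s
    // knobs in the log line above.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("multinode-316400-m02.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; safe to reconfigure the machine")
    }
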
	I0603 05:47:19.818350   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:47:19.818350   10844 fix.go:54] fixHost starting: m02
	I0603 05:47:19.819344   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:22.081876   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:47:22.082897   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:22.082897   10844 fix.go:112] recreateIfNeeded on multinode-316400-m02: state=Stopped err=<nil>
	W0603 05:47:22.083091   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 05:47:22.086873   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400-m02" ...
	I0603 05:47:22.090352   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400-m02
	I0603 05:47:25.177596   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:25.177745   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:25.177745   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:47:25.177790   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:30.077582   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:30.077582   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:31.078225   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:33.355128   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:33.355128   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:33.355904   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:35.905456   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:35.905456   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:36.913898   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:39.176413   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:39.176413   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:39.177413   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:41.755387   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:41.755427   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:42.761024   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:45.071712   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:45.071712   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:45.072525   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:47.686340   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:47.686340   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:48.692467   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:50.978637   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:50.978637   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:50.978822   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:53.613203   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:47:53.613203   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:53.616126   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:55.784628   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:55.784628   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:55.785211   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:58.445682   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:47:58.445682   10844 main.go:141] libmachine: [stderr =====>] : 
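
Note: the block above is libmachine's start-and-wait loop: Start-VM returns as soon as the VM is powered on, so the driver alternates between querying the VM state and asking for the first IP of the first network adapter, waiting about a second between empty answers until DHCP assigns 172.17.91.9. A condensed sketch of that wait, reusing the exact PowerShell expressions from the log (the retry budget is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // psOutput runs one PowerShell expression the way libmachine does:
    // no profile, non-interactive, stdout captured.
    func psOutput(expr string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the VM's first adapter reports an address.
    func waitForIP(vm string, tries int) (string, error) {
    	for i := 0; i < tries; i++ {
    		ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    		if err != nil {
    			return "", err
    		}
    		if ip != "" {
    			return ip, nil // e.g. 172.17.91.9 in the run above
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("no IP for %s after %d attempts", vm, tries)
    }

    func main() {
    	ip, err := waitForIP("multinode-316400-m02", 120)
    	fmt.Println(ip, err)
    }
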
	I0603 05:47:58.446801   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:58.449014   10844 machine.go:94] provisionDockerMachine start ...
	I0603 05:47:58.449014   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:00.666433   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:00.666433   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:00.667076   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:03.331108   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:03.331762   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:03.338508   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:03.339260   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:03.339260   10844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:48:03.470788   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:48:03.470788   10844 buildroot.go:166] provisioning hostname "multinode-316400-m02"
	I0603 05:48:03.470903   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:05.649556   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:05.649556   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:05.650153   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:08.274779   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:08.274779   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:08.280851   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:08.281014   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:08.281014   10844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400-m02 && echo "multinode-316400-m02" | sudo tee /etc/hostname
	I0603 05:48:08.428162   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400-m02
	
	I0603 05:48:08.428162   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:10.699606   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:10.700197   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:10.700197   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:13.389916   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:13.390110   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:13.395698   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:13.396393   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:13.396393   10844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:48:13.547970   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
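
Note: each provisioning step above is a single remote command over SSH against 172.17.91.9:22 (the &{{{<nil> ...}} dumps are the native SSH client's config struct). A minimal equivalent with golang.org/x/crypto/ssh; the user name and key path below are assumptions for illustration, not values from this run:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes one remote command, as the hostname and
    // /etc/hosts steps above do.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Assumed user and key path, shown only to make the sketch runnable.
    	out, err := runSSH("172.17.91.9:22", "docker", `C:\Users\jenkins.minikube1\.ssh\id_rsa`, "hostname")
    	fmt.Println(out, err)
    }
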
	I0603 05:48:13.547970   10844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:48:13.548521   10844 buildroot.go:174] setting up certificates
	I0603 05:48:13.548521   10844 provision.go:84] configureAuth start
	I0603 05:48:13.548521   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:15.748465   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:15.748711   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:15.748711   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:18.332996   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:18.333904   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:18.333904   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:20.484982   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:20.484982   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:20.486701   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:23.049799   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:23.050676   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:23.050676   10844 provision.go:143] copyHostCerts
	I0603 05:48:23.050846   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:48:23.050846   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:48:23.050846   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:48:23.051663   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:48:23.052829   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:48:23.053142   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:48:23.053142   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:48:23.053460   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:48:23.054495   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:48:23.054908   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:48:23.054908   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:48:23.055434   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:48:23.057051   10844 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400-m02 san=[127.0.0.1 172.17.91.9 localhost minikube multinode-316400-m02]
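
provision.go:117 above issues a server certificate signed by the minikube CA, with the listed IP and DNS SANs. The following is a self-contained Go sketch of that kind of issuance (the CA is generated inline so the sketch runs standalone); the SAN values echo the log line, but this is an illustration, not minikube's code.

	// Sketch: issue a CA-signed server cert with IP and DNS SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In the log the CA already exists as ca.pem / ca-key.pem; here it is
		// generated inline so the sketch is runnable on its own.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs from the logged provision step.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-316400-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.91.9")},
			DNSNames:     []string{"localhost", "minikube", "multinode-316400-m02"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
			Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
		}), 0o600)
	}
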
	I0603 05:48:23.193883   10844 provision.go:177] copyRemoteCerts
	I0603 05:48:23.208162   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:48:23.208162   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:25.424419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:25.424419   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:25.424618   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:28.040370   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:28.040370   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:28.040799   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:48:28.149364   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9411837s)
	I0603 05:48:28.149364   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:48:28.150063   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:48:28.198182   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:48:28.198603   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 05:48:28.245379   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:48:28.245683   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 05:48:28.292188   10844 provision.go:87] duration metric: took 14.7436125s to configureAuth
	I0603 05:48:28.292285   10844 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:48:28.292916   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:48:28.293003   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:30.448803   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:30.448803   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:30.449848   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:33.050710   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:33.050710   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:33.057619   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:33.057986   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:33.057986   10844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:48:33.200799   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:48:33.200954   10844 buildroot.go:70] root file system type: tmpfs
	I0603 05:48:33.201163   10844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:48:33.201227   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:35.362575   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:35.362575   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:35.362844   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:37.974012   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:37.974012   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:37.979346   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:37.979742   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:37.979900   10844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.88"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:48:38.135723   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.88
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:48:38.135723   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:40.335497   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:40.335641   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:40.335641   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:42.919471   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:42.919471   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:42.925214   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:42.925740   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:42.925740   10844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:48:45.226379   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:48:45.226489   10844 machine.go:97] duration metric: took 46.7772462s to provisionDockerMachine
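
The `diff -u ... || { mv ...; systemctl ... }` one-liner above makes the unit update idempotent: docker is reloaded and restarted only when the rendered unit actually differs from what is installed (here the diff fails because the file did not exist yet, so the new unit is moved into place and the service enabled). The same logic as local Go, purely for illustration:

	// Sketch: replace a systemd unit and restart docker only on change.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: avoids a needless docker restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
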
	I0603 05:48:45.226489   10844 start.go:293] postStartSetup for "multinode-316400-m02" (driver="hyperv")
	I0603 05:48:45.226568   10844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:48:45.241815   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:48:45.241815   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:47.428646   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:47.428646   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:47.428744   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:50.025859   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:50.025859   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:50.026193   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:48:50.138638   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8968048s)
	I0603 05:48:50.152701   10844 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:48:50.160195   10844 command_runner.go:130] > NAME=Buildroot
	I0603 05:48:50.160377   10844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:48:50.160377   10844 command_runner.go:130] > ID=buildroot
	I0603 05:48:50.160377   10844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:48:50.160377   10844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:48:50.160463   10844 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:48:50.160499   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:48:50.160898   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:48:50.161878   10844 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:48:50.161878   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:48:50.172632   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:48:50.201911   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:48:50.255528   10844 start.go:296] duration metric: took 5.0289406s for postStartSetup
	I0603 05:48:50.255528   10844 fix.go:56] duration metric: took 1m30.4368433s for fixHost
	I0603 05:48:50.255528   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:52.492419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:52.493398   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:52.493398   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:55.138723   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:55.139728   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:55.145249   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:55.145962   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:55.145962   10844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 05:48:55.274953   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418935.277931907
	
	I0603 05:48:55.274953   10844 fix.go:216] guest clock: 1717418935.277931907
	I0603 05:48:55.274953   10844 fix.go:229] Guest: 2024-06-03 05:48:55.277931907 -0700 PDT Remote: 2024-06-03 05:48:50.255528 -0700 PDT m=+301.525318401 (delta=5.022403907s)
	I0603 05:48:55.275139   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:57.443591   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:57.443591   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:57.443728   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:59.950446   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:59.950446   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:59.967909   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:59.968575   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:59.968575   10844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717418935
	I0603 05:49:00.114262   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:48:55 UTC 2024
	
	I0603 05:49:00.114363   10844 fix.go:236] clock set: Mon Jun  3 12:48:55 UTC 2024
	 (err=<nil>)
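
fix.go above samples the guest clock with `date +%s.%N`, compares it to the host clock (delta=5.022403907s here), and resets the guest via `sudo date -s @<seconds>`. A sketch of that delta computation follows; the 2-second threshold is an assumption for illustration, not minikube's exact cutoff.

	// Sketch: parse `date +%s.%N` output and compute the host/guest delta.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		out := "1717418935.277931907" // guest `date +%s.%N`, from the log
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		// Host clock at sample time, pinned to PDT so the delta reproduces.
		pdt := time.FixedZone("PDT", -7*60*60)
		host := time.Date(2024, 6, 3, 5, 48, 50, 255528000, pdt)

		delta := guest.Sub(host)
		fmt.Printf("guest=%s delta=%s\n", guest, delta) // delta ≈ 5.022403907s

		if delta < -2*time.Second || delta > 2*time.Second {
			// would run on the guest: sudo date -s @<unix-seconds>
			fmt.Printf("sudo date -s @%d\n", guest.Unix())
		}
	}
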
	I0603 05:49:00.114363   10844 start.go:83] releasing machines lock for "multinode-316400-m02", held for 1m40.2956418s
	I0603 05:49:00.114572   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:02.234097   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:02.237638   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:02.237722   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:04.718851   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:04.718851   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:04.730665   10844 out.go:177] * Found network options:
	I0603 05:49:04.737150   10844 out.go:177]   - NO_PROXY=172.17.95.88
	W0603 05:49:04.743341   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:49:04.745019   10844 out.go:177]   - NO_PROXY=172.17.95.88
	W0603 05:49:04.750280   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 05:49:04.751601   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:49:04.754611   10844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:49:04.755144   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:04.763656   10844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:49:04.763656   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:06.978121   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:06.978121   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:06.978440   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:06.981691   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:06.981745   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:06.981892   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:09.640036   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:09.640036   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:09.640442   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:09.673237   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:09.673237   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:09.673843   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:09.730668   10844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 05:49:09.736701   10844 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9730256s)
	W0603 05:49:09.736959   10844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:49:09.748391   10844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:49:09.836734   10844 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:49:09.836791   10844 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0819933s)
	I0603 05:49:09.836843   10844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:49:09.836941   10844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
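
The `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` run above sidelines competing bridge/podman CNI configs by renaming them, which is why `87-podman-bridge.conflist` is reported as disabled. The same effect in Go, for illustration:

	// Sketch: rename bridge/podman CNI configs so the runtime ignores them.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
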
	I0603 05:49:09.836941   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:49:09.837157   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:49:09.875932   10844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:49:09.885983   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:49:09.918464   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:49:09.938501   10844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:49:09.952161   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:49:09.987310   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:49:10.023512   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:49:10.054653   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:49:10.089787   10844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:49:10.120953   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:49:10.150956   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:49:10.181682   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:49:10.216356   10844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:49:10.241134   10844 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:49:10.251875   10844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:49:10.283072   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:10.488010   10844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 05:49:10.522433   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:49:10.538331   10844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:49:10.561454   10844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:49:10.561573   10844 command_runner.go:130] > [Unit]
	I0603 05:49:10.561573   10844 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:49:10.561625   10844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:49:10.561625   10844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:49:10.561691   10844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:49:10.561691   10844 command_runner.go:130] > StartLimitBurst=3
	I0603 05:49:10.561691   10844 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:49:10.561756   10844 command_runner.go:130] > [Service]
	I0603 05:49:10.561756   10844 command_runner.go:130] > Type=notify
	I0603 05:49:10.561756   10844 command_runner.go:130] > Restart=on-failure
	I0603 05:49:10.561823   10844 command_runner.go:130] > Environment=NO_PROXY=172.17.95.88
	I0603 05:49:10.561823   10844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:49:10.561902   10844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:49:10.561902   10844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:49:10.561902   10844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:49:10.561998   10844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:49:10.561998   10844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:49:10.561998   10844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:49:10.561998   10844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:49:10.562097   10844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:49:10.562097   10844 command_runner.go:130] > ExecStart=
	I0603 05:49:10.562157   10844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:49:10.562157   10844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:49:10.562227   10844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:49:10.562227   10844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitCORE=infinity
	I0603 05:49:10.562360   10844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:49:10.562360   10844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:49:10.562360   10844 command_runner.go:130] > TasksMax=infinity
	I0603 05:49:10.562360   10844 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:49:10.562360   10844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:49:10.562360   10844 command_runner.go:130] > Delegate=yes
	I0603 05:49:10.562360   10844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:49:10.562360   10844 command_runner.go:130] > KillMode=process
	I0603 05:49:10.562360   10844 command_runner.go:130] > [Install]
	I0603 05:49:10.562360   10844 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:49:10.577467   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:49:10.608695   10844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:49:10.654614   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:49:10.688461   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:49:10.728285   10844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:49:10.793137   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:49:10.817940   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:49:10.863069   10844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:49:10.874688   10844 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:49:10.882131   10844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:49:10.892486   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:49:10.911746   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:49:10.954511   10844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:49:11.144475   10844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:49:11.325909   10844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:49:11.326022   10844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
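
docker.go:574 above writes a small /etc/docker/daemon.json to select the cgroupfs cgroup driver. The log shows only the file's size (130 bytes), not its contents, so the sketch below renders just the documented `exec-opts` dockerd option and is an assumption about what minikube writes:

	// Sketch: render a minimal daemon.json selecting the cgroupfs driver.
	// The field set is assumed; only the size of the real file is logged.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b)) // would be copied to /etc/docker/daemon.json
	}
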
	I0603 05:49:11.371944   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:11.570302   10844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:49:14.135923   10844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5656111s)
	I0603 05:49:14.147984   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:49:14.184481   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:49:14.221951   10844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:49:14.415388   10844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:49:14.622075   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:14.816359   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:49:14.860866   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:49:14.892485   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:15.079416   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:49:15.193252   10844 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:49:15.205956   10844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:49:15.212570   10844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:49:15.212570   10844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:49:15.212570   10844 command_runner.go:130] > Device: 0,22	Inode: 854         Links: 1
	I0603 05:49:15.212570   10844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:49:15.212570   10844 command_runner.go:130] > Access: 2024-06-03 12:49:15.111881513 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] > Modify: 2024-06-03 12:49:15.111881513 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] > Change: 2024-06-03 12:49:15.114881530 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] >  Birth: -
	I0603 05:49:15.216853   10844 start.go:562] Will wait 60s for crictl version
	I0603 05:49:15.229461   10844 ssh_runner.go:195] Run: which crictl
	I0603 05:49:15.236729   10844 command_runner.go:130] > /usr/bin/crictl
	I0603 05:49:15.256025   10844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:49:15.313213   10844 command_runner.go:130] > Version:  0.1.0
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeName:  docker
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:49:15.313817   10844 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:49:15.324990   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:49:15.354571   10844 command_runner.go:130] > 26.0.2
	I0603 05:49:15.365374   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:49:15.393158   10844 command_runner.go:130] > 26.0.2
	I0603 05:49:15.398082   10844 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:49:15.400685   10844 out.go:177]   - env NO_PROXY=172.17.95.88
	I0603 05:49:15.404180   10844 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:49:15.413324   10844 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:49:15.413324   10844 ip.go:210] interface addr: 172.17.80.1/20
	I0603 05:49:15.429278   10844 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:49:15.431950   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:49:15.456370   10844 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:49:15.456536   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:15.457770   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:17.562243   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:17.573709   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:17.573709   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:17.574620   10844 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.91.9
	I0603 05:49:17.574620   10844 certs.go:194] generating shared ca certs ...
	I0603 05:49:17.574620   10844 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:49:17.575254   10844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:49:17.575670   10844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:49:17.576058   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:49:17.577171   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:49:17.577171   10844 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:49:17.577171   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:49:17.577710   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:49:17.577941   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:49:17.578248   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:49:17.578653   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:49:17.579018   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:49:17.579180   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:49:17.579324   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:17.579482   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:49:17.631236   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:49:17.680481   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:49:17.745777   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:49:17.792222   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:49:17.831392   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:49:17.885838   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:49:17.939129   10844 ssh_runner.go:195] Run: openssl version
	I0603 05:49:17.952972   10844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:49:17.967564   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:49:18.004003   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.009514   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.012306   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.022162   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.026375   10844 command_runner.go:130] > b5213941
	I0603 05:49:18.043670   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 05:49:18.075704   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:49:18.106909   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.109224   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.113754   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.123984   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.133227   10844 command_runner.go:130] > 51391683
	I0603 05:49:18.146681   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 05:49:18.177866   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:49:18.209994   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.213215   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.213215   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.218931   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.230842   10844 command_runner.go:130] > 3ec20f2e
	I0603 05:49:18.249230   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
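
The openssl/ln pairs above install each certificate into the system trust store: OpenSSL looks up trust anchors in /etc/ssl/certs by `<subject-hash>.0`, so every installed PEM gets a symlink named after its `openssl x509 -hash -noout` output (b5213941, 51391683, 3ec20f2e here). A Go sketch of the same dance, shelling out to openssl exactly like the logged commands:

	// Sketch: symlink a CA cert into /etc/ssl/certs by its subject hash.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installTrust(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installTrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
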
	I0603 05:49:18.277306   10844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:49:18.284259   10844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:49:18.288414   10844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:49:18.288737   10844 kubeadm.go:928] updating node {m02 172.17.91.9 8443 v1.30.1 docker false true} ...
	I0603 05:49:18.288985   10844 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.91.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
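
kubeadm.go:940 above renders a per-node kubelet unit whose `--hostname-override` and `--node-ip` flags register the worker under its own name and address. A sketch that reproduces the logged unit with text/template; the struct fields are illustrative, not minikube's template types:

	// Sketch: render the per-node kubelet systemd unit shown in the log.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.30.1", "multinode-316400-m02", "172.17.91.9"})
	}
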
	I0603 05:49:18.299487   10844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubeadm
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubectl
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubelet
	I0603 05:49:18.320708   10844 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 05:49:18.332329   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 05:49:18.350964   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 05:49:18.383162   10844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:49:18.426655   10844 ssh_runner.go:195] Run: grep 172.17.95.88	control-plane.minikube.internal$ /etc/hosts
	I0603 05:49:18.428927   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:49:18.463976   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:18.655362   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:49:18.682591   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:18.686530   10844 start.go:316] joinCluster: &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:49:18.686658   10844 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:18.686721   10844 host.go:66] Checking if "multinode-316400-m02" exists ...
	I0603 05:49:18.686947   10844 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:49:18.688092   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:18.688779   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:20.856197   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:20.856197   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:20.856197   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:20.858384   10844 api_server.go:166] Checking apiserver status ...
	I0603 05:49:20.871249   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:49:20.871249   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:23.036998   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:23.036998   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:23.037280   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:25.581615   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:49:25.581615   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:25.585695   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:49:25.705225   10844 command_runner.go:130] > 1862
	I0603 05:49:25.705225   10844 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8339572s)
	I0603 05:49:25.717194   10844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1862/cgroup
	W0603 05:49:25.736329   10844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1862/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 05:49:25.747732   10844 ssh_runner.go:195] Run: ls
	I0603 05:49:25.759177   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:49:25.765473   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
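
The apiserver check above is a plain HTTPS GET against /healthz expecting a 200 with body "ok". A sketch of that probe follows; it skips TLS verification for brevity, where real code would trust the minikube CA instead:

	// Sketch: probe the apiserver healthz endpoint as the log does.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Accept the cluster's self-signed serving cert for the sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://172.17.95.88:8443/healthz")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
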
	I0603 05:49:25.777102   10844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-316400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0603 05:49:25.932078   10844 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-789v5, kube-system/kube-proxy-z26hc
	I0603 05:49:28.961323   10844 command_runner.go:130] > node/multinode-316400-m02 cordoned
	I0603 05:49:28.961374   10844 command_runner.go:130] > pod "busybox-fc5497c4f-hmxqp" has DeletionTimestamp older than 1 seconds, skipping
	I0603 05:49:28.961412   10844 command_runner.go:130] > node/multinode-316400-m02 drained
	I0603 05:49:28.961412   10844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-316400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1842987s)
	I0603 05:49:28.961412   10844 node.go:128] successfully drained node "multinode-316400-m02"
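
The drain above is a single kubectl invocation run over SSH on the control-plane VM; a sketch of the equivalent command, with paths and flags exactly as logged:

    # Cordon and drain the worker before removing it from the cluster.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-316400-m02 \
      --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
      --disable-eviction --ignore-daemonsets --delete-emptydir-data

DaemonSet-managed pods (kindnet, kube-proxy) are deliberately ignored rather than evicted, which is why the warning above is expected.
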
	I0603 05:49:28.961412   10844 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0603 05:49:28.961412   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:31.096075   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:31.096075   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:31.107314   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:33.701759   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:33.701963   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:33.701963   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:34.170937   10844 command_runner.go:130] ! W0603 12:49:34.176875    1540 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0603 05:49:34.699261   10844 command_runner.go:130] ! W0603 12:49:34.704908    1540 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce: output: E0603 12:49:34.393381    1579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-hmxqp_default\" network: cni config uninitialized" podSandboxID="0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce"
	I0603 05:49:34.699261   10844 command_runner.go:130] ! time="2024-06-03T12:49:34Z" level=fatal msg="stopping the pod sandbox \"0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-hmxqp_default\" network: cni config uninitialized"
	I0603 05:49:34.699261   10844 command_runner.go:130] ! : exit status 1
	I0603 05:49:34.725187   10844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Stopping the kubelet service
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0603 05:49:34.725187   10844 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0603 05:49:34.725187   10844 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0603 05:49:34.725187   10844 command_runner.go:130] > to reset your system's IPVS tables.
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0603 05:49:34.725187   10844 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0603 05:49:34.725187   10844 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.7637527s)
	I0603 05:49:34.725187   10844 node.go:155] successfully reset node "multinode-316400-m02"
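
The reset runs on the worker itself and wipes its kubelet, etcd, and PKI state so the node can rejoin cleanly; the command as logged:

    # Tear down the worker's Kubernetes state. CNI config and iptables are
    # intentionally left in place, as kubeadm's own output notes above.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm reset --force --ignore-preflight-errors=all \
      --cri-socket=unix:///var/run/cri-dockerd.sock

The CNI teardown failure for the leftover busybox sandbox is non-fatal here; kubeadm logs it and continues with the cleanup.
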
	I0603 05:49:34.726605   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:49:34.727376   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:49:34.728791   10844 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 05:49:34.729245   10844 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0603 05:49:34.729328   10844 round_trippers.go:463] DELETE https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:34.729394   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:34.729394   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:34.729394   10844 round_trippers.go:473]     Content-Type: application/json
	I0603 05:49:34.729394   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:34.745054   10844 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 05:49:34.745054   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:34.745054   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:34.745054   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Content-Length: 171
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:34 GMT
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Audit-Id: 3873f445-5b68-4d5c-a635-4ffa42a6e4c2
	I0603 05:49:34.745054   10844 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-316400-m02","kind":"nodes","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136"}}
	I0603 05:49:34.745054   10844 node.go:180] successfully deleted node "multinode-316400-m02"
	I0603 05:49:34.745054   10844 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
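
minikube removes the Node object through client-go (the DELETE request traced above). A CLI equivalent, assuming the same in-VM kubeconfig and binary paths:

    # Same effect as DELETE /api/v1/nodes/multinode-316400-m02
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl delete node multinode-316400-m02
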
	I0603 05:49:34.745054   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 05:49:34.745054   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:36.852948   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:36.852948   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:36.853149   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:39.367798   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:49:39.368018   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:39.368187   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:49:39.554752   10844 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
	I0603 05:49:39.554752   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8096798s)
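
With the stale node gone, a fresh bootstrap token is minted on the control plane; --ttl=0 makes the token non-expiring, and --print-join-command emits the full kubeadm join line consumed in the next step:

    # Mint a join token and print the matching join command.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm token create --print-join-command --ttl=0
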
	I0603 05:49:39.554752   10844 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:39.554752   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02"
	I0603 05:49:39.762907   10844 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0603 05:49:41.615948   10844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:49:41.615948   10844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 505.331217ms
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0603 05:49:41.616157   10844 command_runner.go:130] > This node has joined the cluster:
	I0603 05:49:41.616157   10844 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0603 05:49:41.616157   10844 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0603 05:49:41.616225   10844 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0603 05:49:41.616225   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02": (2.0614656s)
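
The join itself reuses the printed command, with minikube appending its own flags; a sketch with the token and CA-cert hash as placeholders (both appear verbatim in the log above):

    # Rejoin the worker; preflight errors are ignored and cri-dockerd is the CRI.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock \
      --node-name=multinode-316400-m02
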
	I0603 05:49:41.616322   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 05:49:41.822049   10844 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
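
This daemon-reload/enable/start sequence clears the [WARNING Service-Kubelet] preflight warning seen during the join:

    # Enable kubelet so it survives reboots, then start it now.
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
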
	I0603 05:49:42.035385   10844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-316400-m02 minikube.k8s.io/updated_at=2024_06_03T05_49_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=multinode-316400 minikube.k8s.io/primary=false
	I0603 05:49:42.158406   10844 command_runner.go:130] > node/multinode-316400-m02 labeled
	I0603 05:49:42.158520   10844 start.go:318] duration metric: took 23.4719694s to joinCluster
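
Finally the rejoined node is tagged with minikube's bookkeeping labels, exactly as logged:

    # Overwrite minikube metadata labels on the worker.
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      label --overwrite nodes multinode-316400-m02 \
      minikube.k8s.io/updated_at=2024_06_03T05_49_42_0700 \
      minikube.k8s.io/version=v1.33.1 \
      minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 \
      minikube.k8s.io/name=multinode-316400 \
      minikube.k8s.io/primary=false
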
	I0603 05:49:42.158713   10844 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:42.161447   10844 out.go:177] * Verifying Kubernetes components...
	I0603 05:49:42.159509   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:42.173773   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:42.368715   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:49:42.394241   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:49:42.394850   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:49:42.395703   10844 node_ready.go:35] waiting up to 6m0s for node "multinode-316400-m02" to be "Ready" ...
	I0603 05:49:42.395869   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:42.395941   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:42.395941   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:42.395941   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:42.396182   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:42.396182   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Audit-Id: d8c749ae-4814-4b84-8902-12d268e26370
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:42.399931   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:42.399931   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:42 GMT
	I0603 05:49:42.400133   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2096","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3563 chars]
	I0603 05:49:42.897461   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:42.897700   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:42.897700   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:42.897777   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:42.906010   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:49:42.906067   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:42.906104   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:42.906104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:42.906104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:42.906153   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:42 GMT
	I0603 05:49:42.906153   10844 round_trippers.go:580]     Audit-Id: 05bb6fd4-de17-4aa7-a2e5-2202c5bedbb4
	I0603 05:49:42.906189   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:42.909322   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2096","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:43.397782   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:43.397782   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:43.397782   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:43.397782   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:43.402349   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:49:43.402349   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:43.402349   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:43.402349   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:43 GMT
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Audit-Id: 1338c209-a00c-42ee-a21c-edadda92c1e5
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:43.402349   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:43.899939   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:43.900175   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:43.900175   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:43.900175   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:43.900979   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:43.906217   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:43.906217   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:43.906217   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:43 GMT
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Audit-Id: 3d8fbded-525b-46b9-b728-c2dc9c943698
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:43.906505   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:44.396688   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:44.396727   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:44.396727   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:44.396727   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:44.397390   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:44.397390   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:44.397390   10844 round_trippers.go:580]     Audit-Id: 6fb6719b-9dd9-4bb8-9021-53bb90f7e450
	I0603 05:49:44.400755   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:44.400755   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:44.400755   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:44.400755   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:44.400813   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:44 GMT
	I0603 05:49:44.401066   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:44.401433   10844 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:49:44.904994   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:44.905092   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:44.905092   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:44.905092   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:44.905364   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:44.909320   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:44.909320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:44.909320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:44.909412   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:44 GMT
	I0603 05:49:44.909412   10844 round_trippers.go:580]     Audit-Id: d3a9d372-c76f-4189-b7d3-44c2b405af28
	I0603 05:49:44.909676   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:44.909676   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:44.909770   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:45.410634   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:45.410883   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.410883   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.410883   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.411655   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.411655   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.411655   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Audit-Id: 00ce39cb-d49d-4f91-817c-d53ac3fa186b
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.415633   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.415633   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.415753   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2120","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3930 chars]
	I0603 05:49:45.416242   10844 node_ready.go:49] node "multinode-316400-m02" has status "Ready":"True"
	I0603 05:49:45.416311   10844 node_ready.go:38] duration metric: took 3.020596s for node "multinode-316400-m02" to be "Ready" ...
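
The GET loop above is minikube's node-readiness poll: one request roughly every 500ms against /api/v1/nodes/multinode-316400-m02 until the Ready condition flips to True. A rough CLI equivalent, assuming the same kubeconfig:

    # Block until the node reports Ready, matching the 6m0s budget in the log.
    kubectl wait --for=condition=Ready node/multinode-316400-m02 --timeout=6m
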
	I0603 05:49:45.416311   10844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:49:45.416515   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:49:45.416548   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.416548   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.416548   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.417243   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.417243   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.417243   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Audit-Id: f7c56023-9148-4a9d-acaa-840d53030101
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.417243   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.423182   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2122"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86024 chars]
	I0603 05:49:45.427677   10844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.427677   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:49:45.427677   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.427677   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.427677   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.428910   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:49:45.428910   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Audit-Id: 5a3795cd-40fc-408b-b70c-0a2710cead91
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.428910   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.428910   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.431883   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0603 05:49:45.432593   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.432593   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.432593   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.432593   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.433195   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.433195   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.433195   10844 round_trippers.go:580]     Audit-Id: 082c65f7-87a9-4ebd-a987-57e708a740f0
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.435976   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.435976   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.436113   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.436113   10844 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.436113   10844 pod_ready.go:81] duration metric: took 8.4365ms for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.436113   10844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.436866   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:49:45.436866   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.436866   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.436866   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.437673   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.446287   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.446287   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.446287   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Audit-Id: 90a18fc8-c241-415f-9c9c-c71f861fd851
	I0603 05:49:45.446520   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1822","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0603 05:49:45.446579   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.446579   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.446579   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.446579   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.447424   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.447424   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Audit-Id: 1fd7a857-3f08-49c5-b7ae-959c201290fa
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.447424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.447424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.449766   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.450164   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.450468   10844 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.450468   10844 pod_ready.go:81] duration metric: took 14.3544ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.450468   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.450468   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:49:45.450468   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.450468   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.450468   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.451786   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.451786   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.451786   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.451786   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Audit-Id: 7db2269e-f6a7-4bc5-8297-a1b2a6ef4016
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.454244   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1830","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0603 05:49:45.454804   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.454880   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.454880   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.454880   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.459322   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:49:45.459322   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.459322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.459322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Audit-Id: dfd9a18c-1737-40d1-a6da-d3d242d6ae0d
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.459976   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.459976   10844 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.459976   10844 pod_ready.go:81] duration metric: took 9.5085ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.459976   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.460506   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:49:45.460506   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.460506   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.460506   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.462665   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:45.462665   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Audit-Id: fc26c92e-023f-4ba6-91de-cd7534a68bcc
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.462665   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.462665   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.462665   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1827","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0603 05:49:45.464900   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.464900   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.464900   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.464900   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.467621   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:45.467621   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.467621   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.467878   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Audit-Id: 36a90c04-c89f-4867-bf8c-431f216e2fcb
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.467878   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.468521   10844 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.468521   10844 pod_ready.go:81] duration metric: took 8.5451ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.468521   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.614161   10844 request.go:629] Waited for 145.3885ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:49:45.614259   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:49:45.614259   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.614259   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.614338   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.615028   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.620267   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.620267   10844 round_trippers.go:580]     Audit-Id: 04e3436e-5a68-4df6-b2b2-571a7f7b2132
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.620342   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.620342   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.620342   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
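
Note on the repeated "Waited ... due to client-side throttling, not priority and fairness" entries: these delays come from client-go's token-bucket rate limiter on the REST client itself, not from server-side API Priority and Fairness. A minimal sketch, assuming client-go and illustrative QPS/Burst values (not minikube's actual settings), of where that limiter lives:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a raised client-side rate limit.
// client-go throttles on the QPS/Burst fields of rest.Config; the defaults
// are low (about 5 QPS / 10 burst), which produces the "Waited ... due to
// client-side throttling" delays in the log. Values below are examples only.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // average requests per second allowed
	cfg.Burst = 100 // short bursts allowed above the average
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}
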
	I0603 05:49:45.816763   10844 request.go:629] Waited for 195.4242ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:49:45.817044   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:49:45.817044   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.817044   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.817044   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.820793   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.820920   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.820920   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Audit-Id: 11958586-c4d0-48ae-b9b9-84d6750e3875
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.820920   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.821110   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1870","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:49:45.821640   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:49:45.821640   10844 pod_ready.go:81] duration metric: took 353.1169ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:49:45.821640   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
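
The skip above is pod_ready's node-readiness gate: a pod hosted on a node whose Ready condition is not "True" (m03 reports "Unknown" after the restart) is not waited on. A minimal sketch, assuming client-go's corev1 types, of that condition check:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady reports whether a node's Ready condition is True. A node that
// has stopped heartbeating shows Status "Unknown", which is why the log
// skips kube-proxy-dl97g on multinode-316400-m03.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
	}
	fmt.Println(nodeIsReady(n)) // false, mirroring the skipped node above
}
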
	I0603 05:49:45.821702   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.032115   10844 request.go:629] Waited for 210.3253ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:49:46.032300   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:49:46.032300   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.032300   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.032300   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.042131   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:49:46.042131   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.042131   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.042131   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Audit-Id: 62566718-8d4b-4699-a4cc-7886732694dd
	I0603 05:49:46.042781   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0603 05:49:46.221415   10844 request.go:629] Waited for 177.7458ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:46.221591   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:46.221633   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.221662   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.221662   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.240234   10844 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0603 05:49:46.240234   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Audit-Id: 1396cfb1-a9a5-43a5-975b-490df236ae25
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.240234   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.240234   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.240234   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:46.241041   10844 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:46.241041   10844 pod_ready.go:81] duration metric: took 419.3381ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.241041   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.416386   10844 request.go:629] Waited for 175.2773ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:49:46.416668   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:49:46.416668   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.416668   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.416668   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.417465   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:46.417465   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Audit-Id: c90f8487-609e-453d-8ef5-8fa13630e6f3
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.417465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.417465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.421002   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.421235   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"2109","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5827 chars]
	I0603 05:49:46.617230   10844 request.go:629] Waited for 195.6847ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:46.617428   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:46.617428   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.617515   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.617515   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.620015   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:46.620015   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Audit-Id: 204ab35f-7933-4708-b919-db41266f7ff0
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.621915   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.621915   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.622196   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2120","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3930 chars]
	I0603 05:49:46.622196   10844 pod_ready.go:92] pod "kube-proxy-z26hc" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:46.622196   10844 pod_ready.go:81] duration metric: took 381.1527ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.622781   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.822893   10844 request.go:629] Waited for 199.9125ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:49:46.823241   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:49:46.823241   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.823241   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.823241   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.827121   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:49:46.827121   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Audit-Id: ce2c5bb2-51ad-4b20-98e6-24f26c42614f
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.827121   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.827121   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.827490   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1854","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0603 05:49:47.032112   10844 request.go:629] Waited for 203.7651ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:47.032226   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:47.032226   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:47.032226   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:47.032226   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:47.032656   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:47.036414   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:47.036414   10844 round_trippers.go:580]     Audit-Id: e724a5ed-d3e2-446f-a725-b40e1e16f1b8
	I0603 05:49:47.036414   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:47.036475   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:47.036475   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:47.036475   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:47.036475   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:47 GMT
	I0603 05:49:47.037014   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:47.037223   10844 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:47.037223   10844 pod_ready.go:81] duration metric: took 414.4404ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:47.037223   10844 pod_ready.go:38] duration metric: took 1.6209055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:49:47.037223   10844 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:49:47.048053   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:49:47.076257   10844 system_svc.go:56] duration metric: took 39.0339ms WaitForService to wait for kubelet
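
The kubelet probe above is a single remote command whose exit status carries the answer: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A sketch of the same probe run locally (the real run goes through minikube's SSH runner inside the guest VM):

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive mirrors the probe in the log: `systemctl is-active --quiet`
// prints nothing and reports state purely through its exit code (0 = active).
// minikube runs this over SSH inside the guest; this sketch runs locally.
func serviceActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
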
	I0603 05:49:47.076387   10844 kubeadm.go:576] duration metric: took 4.9175675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:49:47.076417   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:49:47.219101   10844 request.go:629] Waited for 142.474ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes
	I0603 05:49:47.219280   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:49:47.219280   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:47.219280   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:47.219280   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:47.221237   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:49:47.221237   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:47.224899   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:47.224899   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:47 GMT
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Audit-Id: cb212733-8a10-4a2a-a0e9-e149c1518781
	I0603 05:49:47.225867   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2126"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15603 chars]
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:105] duration metric: took 149.9718ms to run NodePressure ...
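
The three repeated capacity pairs above are one NodeList read across the cluster's three nodes. A sketch, assuming client-go, of pulling the same two quantities from node status:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printNodeCapacities lists every node once and prints the two capacity
// figures the log reports per node: ephemeral storage and CPU count.
func printNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := printNodeCapacities(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}
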
	I0603 05:49:47.226390   10844 start.go:240] waiting for startup goroutines ...
	I0603 05:49:47.227911   10844 start.go:254] writing updated cluster config ...
	I0603 05:49:47.232345   10844 out.go:177] 
	I0603 05:49:47.235167   10844 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:47.243999   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:47.243999   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:49:47.250527   10844 out.go:177] * Starting "multinode-316400-m03" worker node in "multinode-316400" cluster
	I0603 05:49:47.250914   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:49:47.250914   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:49:47.253175   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:49:47.253175   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
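
The cache hit above is a plain file-existence check against a versioned tarball whose name encodes the preload schema (v18), Kubernetes version, container runtime, storage driver, and architecture. A sketch reconstructing that lookup from the path in the log (the helper name is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache file name seen in the log:
// preloaded-images-k8s-<schema>-<k8sVersion>-<runtime>-overlay2-amd64.tar.lz4
func preloadPath(cacheDir, schema, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay2-amd64.tar.lz4", schema, k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache`, "v18", "v1.30.1", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found in cache, skipping download:", p)
	}
}
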
	I0603 05:49:47.253175   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:49:47.258993   10844 start.go:360] acquireMachinesLock for multinode-316400-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:49:47.258993   10844 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400-m03"
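
The lock spec printed above ({Name:... Delay:500ms Timeout:13m0s Cancel:<nil>}) describes a named cross-process mutex acquired with a retry delay and an overall timeout. A simplified stand-in sketch of those Delay/Timeout semantics using an exclusive lock file (not the actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquireLock emulates the Delay/Timeout semantics in the log: retry every
// `delay` until `timeout` elapses. O_EXCL makes creation atomic, so only one
// process can hold the lock file at a time. Illustrative stand-in only.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock(filepath.Join(os.TempDir(), "machines.lock"), 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}
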
	I0603 05:49:47.258993   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:49:47.258993   10844 fix.go:54] fixHost starting: m03
	I0603 05:49:47.259555   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:49:49.323790   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:49:49.334695   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:49.334695   10844 fix.go:112] recreateIfNeeded on multinode-316400-m03: state=Stopped err=<nil>
	W0603 05:49:49.334695   10844 fix.go:138] unexpected machine state, will restart: <nil>
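
Each [executing ==>] line is the Hyper-V driver shelling out to PowerShell and reading the cmdlet's output, as the paired [stdout =====>]/[stderr =====>] lines show. A sketch of that invocation with os/exec (cmdlet text taken from the log; the full System32 path to powershell.exe is shortened here):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same cmdlet shown in the log and returns its trimmed
// stdout, e.g. "Off" or "Running".
func vmState(vmName string) (string, error) {
	var out bytes.Buffer
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		"( Hyper-V\\Get-VM "+vmName+" ).state")
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return strings.TrimSpace(out.String()), nil
}

func main() {
	state, err := vmState("multinode-316400-m03")
	fmt.Println(state, err)
}
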
	I0603 05:49:49.338755   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400-m03" ...
	I0603 05:49:49.341412   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400-m03
	I0603 05:49:52.380595   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:49:52.385198   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:52.385198   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:49:52.385271   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:49:54.655816   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:54.666064   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:54.666064   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:57.202789   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:49:57.202789   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:58.219033   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:50:00.417157   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:50:00.429036   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:50:00.429036   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
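
The tail of the stderr log above is a start-up poll loop: after Start-VM the driver alternates between the VM-state query and the first adapter's first IP address, pausing about a second between rounds, and the log cuts off while m03's address is still empty. A sketch of that wait under the same assumptions as the vmState sketch earlier (both helpers are hypothetical wrappers around the cmdlets shown in the log):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP wraps the adapter query from the log and returns the first IP of the
// VM's first network adapter; it is empty while the guest is still booting.
func vmIP(vmName string) (string, error) {
	var out bytes.Buffer
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		"(( Hyper-V\\Get-VM "+vmName+" ).networkadapters[0]).ipaddresses[0]")
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return strings.TrimSpace(out.String()), nil
}

// waitForIP retries until the VM reports an address or the deadline passes,
// matching the repeated state/IP query pairs in the log above.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := vmIP(vmName); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no IP for %s within %s", vmName, timeout)
}

func main() {
	ip, err := waitForIP("multinode-316400-m03", 6*time.Minute)
	fmt.Println(ip, err)
}
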
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-316400" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-316400
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-316400: context deadline exceeded (63.7µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-316400" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-316400	172.17.87.47
multinode-316400-m02	172.17.94.201
multinode-316400-m03	172.17.87.60

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-316400 -n multinode-316400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-316400 -n multinode-316400: (12.2572083s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 logs -n 25: (11.4532081s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-316400 cp testdata\cp-test.txt                                                                                 | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:34 PDT | 03 Jun 24 05:34 PDT |
	|         | multinode-316400-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:34 PDT | 03 Jun 24 05:34 PDT |
	|         | multinode-316400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:34 PDT | 03 Jun 24 05:34 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:34 PDT | 03 Jun 24 05:35 PDT |
	|         | multinode-316400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:35 PDT | 03 Jun 24 05:35 PDT |
	|         | multinode-316400:/home/docker/cp-test_multinode-316400-m02_multinode-316400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:35 PDT | 03 Jun 24 05:35 PDT |
	|         | multinode-316400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n multinode-316400 sudo cat                                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:35 PDT | 03 Jun 24 05:35 PDT |
	|         | /home/docker/cp-test_multinode-316400-m02_multinode-316400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:35 PDT | 03 Jun 24 05:35 PDT |
	|         | multinode-316400-m03:/home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:35 PDT | 03 Jun 24 05:36 PDT |
	|         | multinode-316400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n multinode-316400-m03 sudo cat                                                                    | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:36 PDT |
	|         | /home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp testdata\cp-test.txt                                                                                 | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:36 PDT |
	|         | multinode-316400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:36 PDT |
	|         | multinode-316400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:36 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:36 PDT |
	|         | multinode-316400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:36 PDT | 03 Jun 24 05:37 PDT |
	|         | multinode-316400:/home/docker/cp-test_multinode-316400-m03_multinode-316400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:37 PDT |
	|         | multinode-316400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n multinode-316400 sudo cat                                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:37 PDT |
	|         | /home/docker/cp-test_multinode-316400-m03_multinode-316400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt                                                        | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:37 PDT |
	|         | multinode-316400-m02:/home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n                                                                                                  | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:37 PDT |
	|         | multinode-316400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-316400 ssh -n multinode-316400-m02 sudo cat                                                                    | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:37 PDT |
	|         | /home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-316400 node stop m03                                                                                           | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:37 PDT | 03 Jun 24 05:38 PDT |
	| node    | multinode-316400 node start                                                                                              | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:39 PDT | 03 Jun 24 05:41 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-316400                                                                                                 | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:42 PDT |                     |
	| stop    | -p multinode-316400                                                                                                      | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:42 PDT | 03 Jun 24 05:43 PDT |
	| start   | -p multinode-316400                                                                                                      | multinode-316400 | minikube1\jenkins | v1.33.1 | 03 Jun 24 05:43 PDT |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 05:43:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
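
The header above documents the glog/klog line format used throughout these logs: a severity letter (I/W/E/F), month and day, wall-clock time with microseconds, a thread id, the emitting file:line, and the message. A sketch regexp that splits such a line into those fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine splits "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// into its documented fields.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch(
		"I0603 05:43:48.816063   10844 out.go:291] Setting OutFile to fd 1460 ...")
	fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
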
	I0603 05:43:48.816063   10844 out.go:291] Setting OutFile to fd 1460 ...
	I0603 05:43:48.816923   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:43:48.816923   10844 out.go:304] Setting ErrFile to fd 1472...
	I0603 05:43:48.816923   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:43:48.835388   10844 out.go:298] Setting JSON to false
	I0603 05:43:48.840840   10844 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7856,"bootTime":1717410772,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 05:43:48.840840   10844 start.go:137] gopshost.Virtualization returned error: not implemented yet
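
The hostinfo line is the JSON form of a gopsutil host probe, and the warning right after it explains the empty virtualizationSystem/virtualizationRole fields: the virtualization check is unimplemented on Windows. A sketch of the same call, assuming the shirou/gopsutil library whose field names the log matches:

package main

import (
	"fmt"

	"github.com/shirou/gopsutil/v3/host"
)

// main prints the fields the log serializes as hostinfo. On Windows the
// virtualization probe is unimplemented, so those fields stay empty, which
// is exactly what the warning in the log reports.
func main() {
	info, err := host.Info()
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname=%s uptime=%ds procs=%d platform=%q kernel=%q\n",
		info.Hostname, info.Uptime, info.Procs, info.Platform, info.KernelVersion)
}
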
	I0603 05:43:48.910410   10844 out.go:177] * [multinode-316400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 05:43:48.973379   10844 notify.go:220] Checking for updates...
	I0603 05:43:49.007199   10844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:43:49.067130   10844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 05:43:49.115725   10844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 05:43:49.176193   10844 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 05:43:49.191212   10844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 05:43:49.222521   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:43:49.222521   10844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 05:43:54.451501   10844 out.go:177] * Using the hyperv driver based on existing profile
	I0603 05:43:54.523855   10844 start.go:297] selected driver: hyperv
	I0603 05:43:54.523966   10844 start.go:901] validating driver "hyperv" against &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:43:54.524466   10844 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 05:43:54.574263   10844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:43:54.574498   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:43:54.574579   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
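
The cni.go lines show the selection rule for multinode clusters: with three nodes found and no CNI configured, kindnet is recommended. A trivial sketch of that decision (the function name and the single-node fallback are assumptions, not minikube's exact logic):

package main

import "fmt"

// chooseCNI sketches the rule visible in the log: an explicit choice wins;
// otherwise a multinode cluster (here, 3 nodes found) gets kindnet. The
// single-node fallback below is a placeholder, not minikube's real default.
func chooseCNI(configured string, nodeCount int) string {
	if configured != "" {
		return configured
	}
	if nodeCount > 1 {
		return "kindnet"
	}
	return ""
}

func main() {
	fmt.Println(chooseCNI("", 3)) // kindnet
}
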
	I0603 05:43:54.574579   10844 start.go:340] cluster config:
	{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.87.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:f
alse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:43:54.575113   10844 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 05:43:54.660619   10844 out.go:177] * Starting "multinode-316400" primary control-plane node in "multinode-316400" cluster
	I0603 05:43:54.697784   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:43:54.703284   10844 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 05:43:54.703284   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:43:54.703826   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:43:54.704126   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:43:54.704585   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:43:54.707531   10844 start.go:360] acquireMachinesLock for multinode-316400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:43:54.707763   10844 start.go:364] duration metric: took 115.4µs to acquireMachinesLock for "multinode-316400"
	I0603 05:43:54.707996   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:43:54.708102   10844 fix.go:54] fixHost starting: 
	I0603 05:43:54.708760   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:43:57.433627   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:43:57.433627   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:43:57.433764   10844 fix.go:112] recreateIfNeeded on multinode-316400: state=Stopped err=<nil>
	W0603 05:43:57.433764   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 05:43:57.447672   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400" ...
	I0603 05:43:57.458029   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:00.557726   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:44:00.557726   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:02.809771   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:05.277634   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:05.277634   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:06.282004   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:08.551271   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:08.551598   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:08.551598   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:11.140571   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:11.140571   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:12.156391   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:14.421680   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:14.421680   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:14.421955   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:16.986756   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:16.986803   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:17.996578   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:20.254690   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:20.254799   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:20.254880   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:22.853590   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:44:22.853590   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:23.860871   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:26.126650   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:26.127700   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:26.127836   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:28.765100   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:28.765310   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:28.768270   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:30.983922   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:30.984596   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:30.984873   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:33.636435   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:33.637359   10844 main.go:141] libmachine: [stderr =====>] : 
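For context, the hyperv driver above discovers the guest address by repeatedly shelling out to PowerShell until the VM's first adapter reports an IP. A minimal Go sketch of that loop, assuming powershell.exe is on PATH; the helper name is illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// vmIP runs the same PowerShell query shown in the log and returns trimmed stdout.
	func vmIP(vm string) (string, error) {
		ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Poll until the adapter reports an address, mirroring the
		// "Waiting for host to start..." loop in the log above.
		for {
			if ip, err := vmIP("multinode-316400"); err == nil && ip != "" {
				fmt.Println("VM is up at", ip)
				return
			}
			time.Sleep(time.Second)
		}
	}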
	I0603 05:44:33.637602   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:44:33.640287   10844 machine.go:94] provisionDockerMachine start ...
	I0603 05:44:33.640381   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:35.824890   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:35.825056   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:35.825133   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:38.433997   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:38.433997   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:38.440668   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:38.441193   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:38.441424   10844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:44:38.572796   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:44:38.572796   10844 buildroot.go:166] provisioning hostname "multinode-316400"
	I0603 05:44:38.573096   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:40.687886   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:40.688360   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:40.688360   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:43.250914   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:43.251028   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:43.256529   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:43.257052   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:43.257183   10844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400 && echo "multinode-316400" | sudo tee /etc/hostname
	I0603 05:44:43.409594   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400
	
	I0603 05:44:43.409594   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:45.585770   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:45.586666   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:45.586740   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:48.117050   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:48.117251   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:48.122636   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:44:48.123313   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:44:48.123313   10844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:44:48.267373   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
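The shell snippet above makes the hostname mapping idempotent: it only touches /etc/hosts when no line already names the host. A rough Go equivalent, assuming root privileges (the hostname is taken from the log; this is a sketch, not the provisioner's implementation):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "multinode-316400"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Append a 127.0.1.1 mapping only if no existing line names the host,
		// keeping the edit idempotent like the grep/sed/tee snippet above.
		if !strings.Contains(string(data), host) {
			f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
			if err != nil {
				panic(err)
			}
			defer f.Close()
			fmt.Fprintf(f, "127.0.1.1 %s\n", host)
		}
	}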
	I0603 05:44:48.267373   10844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:44:48.267373   10844 buildroot.go:174] setting up certificates
	I0603 05:44:48.267373   10844 provision.go:84] configureAuth start
	I0603 05:44:48.267373   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:50.397193   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:50.398194   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:50.398194   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:52.922079   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:52.922828   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:52.922899   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:55.041046   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:55.041046   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:55.041850   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:44:57.607314   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:44:57.607314   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:57.607314   10844 provision.go:143] copyHostCerts
	I0603 05:44:57.607556   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:44:57.607628   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:44:57.607628   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:44:57.608183   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:44:57.609499   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:44:57.609839   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:44:57.609839   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:44:57.610232   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:44:57.611238   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:44:57.611504   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:44:57.611504   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:44:57.611655   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:44:57.612658   10844 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400 san=[127.0.0.1 172.17.95.88 localhost minikube multinode-316400]
	I0603 05:44:57.694551   10844 provision.go:177] copyRemoteCerts
	I0603 05:44:57.706699   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:44:57.707300   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:44:59.825776   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:44:59.826249   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:44:59.826249   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:02.399629   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:02.399629   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:02.399629   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:02.502175   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7954582s)
	I0603 05:45:02.502175   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:45:02.503291   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:45:02.548818   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:45:02.548910   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 05:45:02.597883   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:45:02.598449   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 05:45:02.642864   10844 provision.go:87] duration metric: took 14.3754372s to configureAuth
	I0603 05:45:02.642864   10844 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:45:02.643867   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:45:02.643958   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:04.742801   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:04.742801   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:04.742880   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:07.428026   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:07.428026   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:07.434100   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:07.434348   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:07.434348   10844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:45:07.563888   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:45:07.563888   10844 buildroot.go:70] root file system type: tmpfs
	I0603 05:45:07.563888   10844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:45:07.563888   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:09.755582   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:09.756487   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:09.756487   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:12.303886   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:12.304516   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:12.309939   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:12.310597   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:12.310597   10844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:45:12.472332   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:45:12.472452   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:14.613050   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:14.613050   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:14.613410   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:17.170955   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:17.171094   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:17.176550   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:17.177233   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:17.177233   10844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:45:19.620742   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:45:19.620742   10844 machine.go:97] duration metric: took 45.9802558s to provisionDockerMachine
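The unit update just above uses a write-then-swap pattern: stage docker.service.new, replace the live unit only when diff reports a difference, then daemon-reload and restart. A small Go sketch that assembles and runs the same one-liner locally (illustrative; the provisioner actually runs it over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		unit := "/lib/systemd/system/docker.service"
		// Swap the unit only when the staged copy differs, then reload and
		// restart -- the same "diff || { mv; daemon-reload; restart; }" shape
		// seen in the log above.
		script := fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }", unit)
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Println(string(out), err)
	}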
	I0603 05:45:19.620742   10844 start.go:293] postStartSetup for "multinode-316400" (driver="hyperv")
	I0603 05:45:19.620742   10844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:45:19.631739   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:45:19.632742   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:21.800577   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:21.800717   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:21.800830   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:24.312032   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:24.313038   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:24.313294   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:24.432701   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8009445s)
	I0603 05:45:24.445165   10844 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:45:24.454443   10844 command_runner.go:130] > NAME=Buildroot
	I0603 05:45:24.454539   10844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:45:24.454539   10844 command_runner.go:130] > ID=buildroot
	I0603 05:45:24.454539   10844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:45:24.454539   10844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:45:24.454596   10844 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:45:24.454596   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:45:24.455134   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:45:24.456082   10844 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:45:24.456143   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:45:24.470725   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:45:24.490808   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:45:24.535234   10844 start.go:296] duration metric: took 4.9144739s for postStartSetup
	I0603 05:45:24.535234   10844 fix.go:56] duration metric: took 1m29.8267995s for fixHost
	I0603 05:45:24.535234   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:26.738491   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:26.738537   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:26.738537   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:29.303844   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:29.304102   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:29.312620   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:29.312838   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:29.312838   10844 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 05:45:29.445596   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418729.453516780
	
	I0603 05:45:29.445596   10844 fix.go:216] guest clock: 1717418729.453516780
	I0603 05:45:29.445596   10844 fix.go:229] Guest: 2024-06-03 05:45:29.45351678 -0700 PDT Remote: 2024-06-03 05:45:24.5352342 -0700 PDT m=+95.805785701 (delta=4.91828258s)
	I0603 05:45:29.445596   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:31.631915   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:31.631915   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:31.632511   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:34.166993   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:34.166993   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:34.171595   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:45:34.172185   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.95.88 22 <nil> <nil>}
	I0603 05:45:34.172185   10844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717418729
	I0603 05:45:34.309869   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:45:29 UTC 2024
	
	I0603 05:45:34.309934   10844 fix.go:236] clock set: Mon Jun  3 12:45:29 UTC 2024
	 (err=<nil>)
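The clock fix above compares the guest's `date +%s.%N` output against the host clock and resets the guest with `sudo date -s @<seconds>` when they drift (about 4.9s here). A sketch of the delta computation in Go, using the value from the log; float parsing loses sub-microsecond precision, which is fine for this purpose:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Guest clock as reported by `date +%s.%N` over SSH (value from the log).
		raw := "1717418729.453516780"
		sec, err := strconv.ParseFloat(raw, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		// A positive delta means the guest runs ahead of the host.
		fmt.Println("delta:", guest.Sub(time.Now()))
	}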
	I0603 05:45:34.310002   10844 start.go:83] releasing machines lock for "multinode-316400", held for 1m39.6017028s
	I0603 05:45:34.310154   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:36.417421   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:36.417421   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:36.418195   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:38.986858   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:38.986858   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:38.991392   10844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:45:38.991526   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:39.000728   10844 ssh_runner.go:195] Run: cat /version.json
	I0603 05:45:39.001715   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:41.209614   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:41.210327   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:45:43.850751   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:43.850751   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:43.850751   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:43.872394   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:45:43.873137   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:45:43.873261   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:45:43.943745   10844 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 05:45:43.943972   10844 ssh_runner.go:235] Completed: cat /version.json: (4.9420123s)
	I0603 05:45:43.959558   10844 ssh_runner.go:195] Run: systemctl --version
	I0603 05:45:44.015709   10844 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:45:44.015709   10844 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0242986s)
	I0603 05:45:44.015830   10844 command_runner.go:130] > systemd 252 (252)
	I0603 05:45:44.015830   10844 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 05:45:44.027814   10844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:45:44.036653   10844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 05:45:44.036653   10844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:45:44.048619   10844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:45:44.078579   10844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:45:44.078579   10844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
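The `find ... -exec mv` above sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so the multinode CNI (kindnet) wins. A minimal Go rendering of the same idea, assuming root on the guest; names mirror the log, the code is illustrative:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Rename bridge/podman CNI configs out of the way, as the log does.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				fmt.Println("disabling", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
				}
			}
		}
	}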
	I0603 05:45:44.078746   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:45:44.079007   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:45:44.112111   10844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:45:44.124848   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:45:44.157147   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:45:44.177408   10844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:45:44.190131   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:45:44.224380   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:45:44.262949   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:45:44.295838   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:45:44.332622   10844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:45:44.364631   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:45:44.395593   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:45:44.425337   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:45:44.455321   10844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:45:44.476664   10844 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:45:44.489107   10844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:45:44.518337   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:44.712162   10844 ssh_runner.go:195] Run: sudo systemctl restart containerd
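Each of the sed calls above is a replace-in-place edit of /etc/containerd/config.toml; the key one forces the cgroupfs driver by rewriting the SystemdCgroup line. A Go sketch of that single edit (a rough stand-in for the sed invocation, assuming root):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		b, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Force the cgroupfs driver, mirroring `sed -i ... SystemdCgroup = false`;
		// the other sed calls in the log follow the same shape.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		b = re.ReplaceAll(b, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, b, 0644); err != nil {
			panic(err)
		}
	}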
	I0603 05:45:44.744396   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:45:44.756988   10844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:45:44.781124   10844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:45:44.781124   10844 command_runner.go:130] > [Unit]
	I0603 05:45:44.781202   10844 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:45:44.781202   10844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:45:44.781202   10844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:45:44.781202   10844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:45:44.781202   10844 command_runner.go:130] > StartLimitBurst=3
	I0603 05:45:44.781202   10844 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:45:44.781258   10844 command_runner.go:130] > [Service]
	I0603 05:45:44.781258   10844 command_runner.go:130] > Type=notify
	I0603 05:45:44.781258   10844 command_runner.go:130] > Restart=on-failure
	I0603 05:45:44.781258   10844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:45:44.781258   10844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:45:44.781308   10844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:45:44.781308   10844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:45:44.781308   10844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:45:44.781308   10844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:45:44.781384   10844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:45:44.781384   10844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:45:44.781384   10844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:45:44.781442   10844 command_runner.go:130] > ExecStart=
	I0603 05:45:44.781482   10844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:45:44.781534   10844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:45:44.781556   10844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:45:44.781556   10844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:45:44.781584   10844 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > LimitCORE=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:45:44.781622   10844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:45:44.781622   10844 command_runner.go:130] > TasksMax=infinity
	I0603 05:45:44.781622   10844 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:45:44.781622   10844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:45:44.781695   10844 command_runner.go:130] > Delegate=yes
	I0603 05:45:44.781695   10844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:45:44.781719   10844 command_runner.go:130] > KillMode=process
	I0603 05:45:44.781748   10844 command_runner.go:130] > [Install]
	I0603 05:45:44.781748   10844 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:45:44.795062   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:45:44.825265   10844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:45:44.860097   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:45:44.892930   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:45:44.929529   10844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:45:44.999676   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:45:45.022637   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:45:45.057391   10844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:45:45.068376   10844 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:45:45.074412   10844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:45:45.085379   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:45:45.103812   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:45:45.145743   10844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:45:45.367351   10844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:45:45.559233   10844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:45:45.559541   10844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 05:45:45.603824   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:45.797277   10844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:45:48.437479   10844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6401915s)
	I0603 05:45:48.451204   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:45:48.483204   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:45:48.517357   10844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:45:48.733337   10844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:45:48.937108   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:49.146158   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:45:49.188509   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:45:49.224547   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:49.417865   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:45:49.526417   10844 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:45:49.537714   10844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:45:49.547080   10844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:45:49.547214   10844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:45:49.547214   10844 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0603 05:45:49.547214   10844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:45:49.547214   10844 command_runner.go:130] > Access: 2024-06-03 12:45:49.452283219 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] > Modify: 2024-06-03 12:45:49.452283219 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] > Change: 2024-06-03 12:45:49.457283264 +0000
	I0603 05:45:49.547214   10844 command_runner.go:130] >  Birth: -
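"Will wait 60s for socket path" above is a stat-until-exists loop on the cri-dockerd socket. A self-contained Go sketch of that wait, using the same path and budget as the log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/cri-dockerd.sock"
		deadline := time.Now().Add(60 * time.Second) // same 60s budget as the log
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println(sock, "is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", sock)
	}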
	I0603 05:45:49.547403   10844 start.go:562] Will wait 60s for crictl version
	I0603 05:45:49.560071   10844 ssh_runner.go:195] Run: which crictl
	I0603 05:45:49.565489   10844 command_runner.go:130] > /usr/bin/crictl
	I0603 05:45:49.576897   10844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:45:49.629626   10844 command_runner.go:130] > Version:  0.1.0
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeName:  docker
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:45:49.630513   10844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:45:49.630513   10844 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:45:49.639893   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:45:49.670938   10844 command_runner.go:130] > 26.0.2
	I0603 05:45:49.682613   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:45:49.711808   10844 command_runner.go:130] > 26.0.2
	I0603 05:45:49.717677   10844 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:45:49.717865   10844 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:45:49.722243   10844 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:45:49.724868   10844 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:45:49.724868   10844 ip.go:210] interface addr: 172.17.80.1/20
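getIPForInterface above walks the host's interfaces, skips those whose names don't match the "vEthernet (Default Switch)" prefix, and reads the addresses of the one that does. A minimal Go sketch of that scan using the standard library (illustrative, not minikube's exact code):

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		// Find the host-side Hyper-V switch interface by name prefix, then
		// list its addresses, as the log does above.
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
				continue
			}
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				fmt.Println(ifc.Name, "->", a.String())
			}
		}
	}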
	I0603 05:45:49.740250   10844 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:45:49.747348   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:45:49.774754   10844 kubeadm.go:877] updating cluster {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 05:45:49.775093   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:45:49.784947   10844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:45:49.814591   10844 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 05:45:49.815570   10844 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 05:45:49.815570   10844 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:45:49.815570   10844 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 05:45:49.815771   10844 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 05:45:49.815771   10844 docker.go:615] Images already preloaded, skipping extraction
	I0603 05:45:49.825761   10844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 05:45:49.848282   10844 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 05:45:49.848432   10844 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 05:45:49.848481   10844 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 05:45:49.848481   10844 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 05:45:49.848481   10844 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 05:45:49.848589   10844 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 05:45:49.848644   10844 cache_images.go:84] Images are preloaded, skipping loading
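The preload check above lists loaded images with `docker images --format {{.Repository}}:{{.Tag}}` and skips tarball extraction when the expected set is present. A sketch of that comparison in Go, checking a subset of the v1.30.1 images from the log (assumes the docker CLI is available):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List loaded images the way the log does, then check the expected set.
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		// A subset of the images the preload is expected to contain.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/pause:3.9",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}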
	I0603 05:45:49.848644   10844 kubeadm.go:928] updating node { 172.17.95.88 8443 v1.30.1 docker true true} ...
	I0603 05:45:49.848948   10844 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.95.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 05:45:49.858246   10844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 05:45:49.892506   10844 command_runner.go:130] > cgroupfs
	I0603 05:45:49.893814   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:45:49.893814   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 05:45:49.893814   10844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 05:45:49.893905   10844 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.95.88 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-316400 NodeName:multinode-316400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.95.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.95.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 05:45:49.894199   10844 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.95.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-316400"
	  kubeletExtraArgs:
	    node-ip: 172.17.95.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
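The kubeadm.yaml printed above is produced by substituting the option struct from kubeadm.go:181 into a YAML template. A minimal Go sketch of that rendering step, assuming an illustrative Opts struct and a trimmed template rather than minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Opts mirrors a few of the kubeadm options logged above; minikube's
    // real struct carries many more fields.
    type Opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values taken from the log lines above.
    	if err := t.Execute(os.Stdout, Opts{"172.17.95.88", 8443, "multinode-316400", "10.244.0.0/16"}); err != nil {
    		panic(err)
    	}
    }

Templating keeps the YAML readable while letting the driver swap in per-profile values such as the advertise address and node name.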
	
	I0603 05:45:49.906839   10844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubeadm
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubectl
	I0603 05:45:49.926616   10844 command_runner.go:130] > kubelet
	I0603 05:45:49.926616   10844 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 05:45:49.938257   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 05:45:49.958114   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0603 05:45:49.992902   10844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:45:50.023256   10844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0603 05:45:50.067301   10844 ssh_runner.go:195] Run: grep 172.17.95.88	control-plane.minikube.internal$ /etc/hosts
	I0603 05:45:50.073480   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
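The bash one-liner above rewrites /etc/hosts: it filters out any stale line for control-plane.minikube.internal, appends the current IP, and routes through a temp file plus sudo cp so the privileged write happens in one step. The same logic as a Go sketch that writes the file directly (the helper name and 0644 mode are assumptions):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // rewriteHosts mirrors the shell one-liner: drop any existing line for
    // the control-plane host name, then append the current IP mapping.
    func rewriteHosts(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := rewriteHosts("/etc/hosts", "172.17.95.88", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }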
	I0603 05:45:50.111809   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:45:50.312147   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:45:50.346041   10844 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.95.88
	I0603 05:45:50.346041   10844 certs.go:194] generating shared ca certs ...
	I0603 05:45:50.346160   10844 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.346878   10844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:45:50.347284   10844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:45:50.347496   10844 certs.go:256] generating profile certs ...
	I0603 05:45:50.348108   10844 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\client.key
	I0603 05:45:50.348222   10844 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17
	I0603 05:45:50.348417   10844 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.88]
	I0603 05:45:50.539063   10844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 ...
	I0603 05:45:50.539063   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17: {Name:mk5be6417b01220b39e4973282b711a048fd41b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.540501   10844 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17 ...
	I0603 05:45:50.540501   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17: {Name:mkc2845c79a22602a493821a7a6efafb1bd00853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:50.541382   10844 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt.57b1ef17 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt
	I0603 05:45:50.557330   10844 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key.57b1ef17 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key
	I0603 05:45:50.558417   10844 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key
	I0603 05:45:50.558417   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:45:50.559495   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:45:50.559682   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:45:50.559738   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:45:50.560058   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 05:45:50.560354   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 05:45:50.561050   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 05:45:50.561050   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 05:45:50.562144   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:45:50.562670   10844 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:45:50.562916   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:45:50.563324   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:45:50.563684   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:45:50.564158   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:45:50.564899   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:45:50.565227   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:50.565496   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:45:50.565756   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:45:50.567324   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:45:50.616973   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:45:50.668387   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:45:50.714540   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:45:50.758039   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 05:45:50.806066   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 05:45:50.853517   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 05:45:50.901582   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 05:45:50.947781   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:45:50.992386   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:45:51.037838   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:45:51.080332   10844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 05:45:51.123114   10844 ssh_runner.go:195] Run: openssl version
	I0603 05:45:51.132669   10844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:45:51.144276   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:45:51.176695   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.183161   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.183684   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.199231   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:45:51.206773   10844 command_runner.go:130] > b5213941
	I0603 05:45:51.217222   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 05:45:51.248588   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:45:51.280201   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.288208   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.288208   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.299210   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:45:51.310099   10844 command_runner.go:130] > 51391683
	I0603 05:45:51.322261   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 05:45:51.352751   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:45:51.385525   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.394018   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.394018   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.406322   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:45:51.415944   10844 command_runner.go:130] > 3ec20f2e
	I0603 05:45:51.427945   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
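Each openssl x509 -hash -noout call above prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e), and OpenSSL looks up trusted CAs in /etc/ssl/certs by a <hash>.0 symlink, which is exactly what the ln -fs commands create. A sketch of the same install step in Go (the helper name installCA is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA computes the OpenSSL subject hash for pemPath and links it
    // into /etc/ssl/certs as <hash>.0, mirroring the three certs above.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// ln -fs semantics: remove any stale link, then create a fresh one.
    	os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }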
	I0603 05:45:51.461359   10844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:45:51.469161   10844 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:45:51.469161   10844 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 05:45:51.469161   10844 command_runner.go:130] > Device: 8,1	Inode: 4196168     Links: 1
	I0603 05:45:51.469161   10844 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 05:45:51.469161   10844 command_runner.go:130] > Access: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] > Modify: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] > Change: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.469161   10844 command_runner.go:130] >  Birth: 2024-06-03 12:22:52.928226117 +0000
	I0603 05:45:51.480677   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 05:45:51.490675   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.501675   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 05:45:51.510483   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.521820   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 05:45:51.530900   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.542424   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 05:45:51.553093   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.563438   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 05:45:51.571964   10844 command_runner.go:130] > Certificate will not expire
	I0603 05:45:51.583661   10844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 05:45:51.593031   10844 command_runner.go:130] > Certificate will not expire
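The -checkend 86400 probes above ask whether each certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go via crypto/x509, as a sketch (willExpireWithin is a hypothetical helper, not minikube code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // willExpireWithin mirrors `openssl x509 -checkend`: report whether the
    // certificate at path expires inside the given window.
    func willExpireWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := willExpireWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if expiring {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }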
	I0603 05:45:51.593417   10844 kubeadm.go:391] StartCluster: {Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.94.201 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 05:45:51.603534   10844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 05:45:51.637170   10844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 05:45:51.658628   10844 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 05:45:51.658628   10844 command_runner.go:130] > member
	W0603 05:45:51.658734   10844 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 05:45:51.658734   10844 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 05:45:51.658734   10844 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 05:45:51.670760   10844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 05:45:51.688309   10844 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 05:45:51.689593   10844 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-316400" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:45:51.690116   10844 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-316400" cluster setting kubeconfig missing "multinode-316400" context setting]
	I0603 05:45:51.691193   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:45:51.705622   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:45:51.707044   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:45:51.707880   10844 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 05:45:51.720613   10844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 05:45:51.740283   10844 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0603 05:45:51.740328   10844 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0603 05:45:51.740328   10844 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 05:45:51.740328   10844 command_runner.go:130] >  kind: InitConfiguration
	I0603 05:45:51.740328   10844 command_runner.go:130] >  localAPIEndpoint:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -  advertiseAddress: 172.17.87.47
	I0603 05:45:51.740328   10844 command_runner.go:130] > +  advertiseAddress: 172.17.95.88
	I0603 05:45:51.740328   10844 command_runner.go:130] >    bindPort: 8443
	I0603 05:45:51.740328   10844 command_runner.go:130] >  bootstrapTokens:
	I0603 05:45:51.740328   10844 command_runner.go:130] >    - groups:
	I0603 05:45:51.740328   10844 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0603 05:45:51.740328   10844 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0603 05:45:51.740328   10844 command_runner.go:130] >    name: "multinode-316400"
	I0603 05:45:51.740328   10844 command_runner.go:130] >    kubeletExtraArgs:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -    node-ip: 172.17.87.47
	I0603 05:45:51.740328   10844 command_runner.go:130] > +    node-ip: 172.17.95.88
	I0603 05:45:51.740328   10844 command_runner.go:130] >    taints: []
	I0603 05:45:51.740328   10844 command_runner.go:130] >  ---
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 05:45:51.740328   10844 command_runner.go:130] >  kind: ClusterConfiguration
	I0603 05:45:51.740328   10844 command_runner.go:130] >  apiServer:
	I0603 05:45:51.740328   10844 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.87.47"]
	I0603 05:45:51.740328   10844 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	I0603 05:45:51.740328   10844 command_runner.go:130] >    extraArgs:
	I0603 05:45:51.740328   10844 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0603 05:45:51.740328   10844 command_runner.go:130] >  controllerManager:
	I0603 05:45:51.740328   10844 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.87.47
	+  advertiseAddress: 172.17.95.88
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-316400"
	   kubeletExtraArgs:
	-    node-ip: 172.17.87.47
	+    node-ip: 172.17.95.88
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.87.47"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.95.88"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
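Drift detection here hinges on diff's exit status: 0 means the staged kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means they differ (as with the advertise-address change above), and anything else is a real error. A local Go sketch of that decision:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	// Output still returns the captured stdout even when diff exits 1.
    	out, err := cmd.Output()
    	if err == nil {
    		fmt.Println("configs match, no reconfiguration needed")
    		return
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("kubeadm config drift detected, reconfiguring:\n%s", out)
    		return
    	}
    	panic(err)
    }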
	I0603 05:45:51.740328   10844 kubeadm.go:1154] stopping kube-system containers ...
	I0603 05:45:51.749053   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 05:45:51.785016   10844 command_runner.go:130] > 8280b3904678
	I0603 05:45:51.785016   10844 command_runner.go:130] > f3d3a474bbe6
	I0603 05:45:51.785016   10844 command_runner.go:130] > 4956a24c17e7
	I0603 05:45:51.785016   10844 command_runner.go:130] > d4b4a69fc5b7
	I0603 05:45:51.785016   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:45:51.785016   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:45:51.785016   10844 command_runner.go:130] > 53f366fa802e
	I0603 05:45:51.785016   10844 command_runner.go:130] > 0ab8fbb688df
	I0603 05:45:51.785016   10844 command_runner.go:130] > 29c39ff8468f
	I0603 05:45:51.785016   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:45:51.785016   10844 command_runner.go:130] > 8c884e5bfb96
	I0603 05:45:51.785016   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:45:51.785016   10844 command_runner.go:130] > a24225992b63
	I0603 05:45:51.785016   10844 command_runner.go:130] > bf22fe666154
	I0603 05:45:51.785016   10844 command_runner.go:130] > 77f0d5d979f8
	I0603 05:45:51.785016   10844 command_runner.go:130] > 10b8b906c7ec
	I0603 05:45:51.785016   10844 docker.go:483] Stopping containers: [8280b3904678 f3d3a474bbe6 4956a24c17e7 d4b4a69fc5b7 a00a9dc2a937 ad08c7b8f3af 53f366fa802e 0ab8fbb688df 29c39ff8468f f39be6db7a1f 8c884e5bfb96 3d7dc29a5791 a24225992b63 bf22fe666154 77f0d5d979f8 10b8b906c7ec]
	I0603 05:45:51.794381   10844 ssh_runner.go:195] Run: docker stop 8280b3904678 f3d3a474bbe6 4956a24c17e7 d4b4a69fc5b7 a00a9dc2a937 ad08c7b8f3af 53f366fa802e 0ab8fbb688df 29c39ff8468f f39be6db7a1f 8c884e5bfb96 3d7dc29a5791 a24225992b63 bf22fe666154 77f0d5d979f8 10b8b906c7ec
	I0603 05:45:51.827531   10844 command_runner.go:130] > 8280b3904678
	I0603 05:45:51.828280   10844 command_runner.go:130] > f3d3a474bbe6
	I0603 05:45:51.828280   10844 command_runner.go:130] > 4956a24c17e7
	I0603 05:45:51.828280   10844 command_runner.go:130] > d4b4a69fc5b7
	I0603 05:45:51.828280   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:45:51.828280   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:45:51.828280   10844 command_runner.go:130] > 53f366fa802e
	I0603 05:45:51.828280   10844 command_runner.go:130] > 0ab8fbb688df
	I0603 05:45:51.828280   10844 command_runner.go:130] > 29c39ff8468f
	I0603 05:45:51.828280   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:45:51.828280   10844 command_runner.go:130] > 8c884e5bfb96
	I0603 05:45:51.828280   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:45:51.828280   10844 command_runner.go:130] > a24225992b63
	I0603 05:45:51.828418   10844 command_runner.go:130] > bf22fe666154
	I0603 05:45:51.828418   10844 command_runner.go:130] > 77f0d5d979f8
	I0603 05:45:51.828418   10844 command_runner.go:130] > 10b8b906c7ec
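The name=k8s_.*_(kube-system)_ filter works because cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so it captures exactly the kube-system containers listed above before they are stopped in a single docker stop. A sketch of the same collect-then-stop flow:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// cri-dockerd names containers k8s_<ctr>_<pod>_<namespace>_<uid>_<n>,
    	// so this filter selects everything in the kube-system namespace.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return
    	}
    	fmt.Println("Stopping containers:", ids)
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		panic(err)
    	}
    }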
	I0603 05:45:51.840536   10844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 05:45:51.880369   10844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 05:45:51.899992   10844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:45:51.899992   10844 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 05:45:51.899992   10844 kubeadm.go:156] found existing configuration files:
	
	I0603 05:45:51.912770   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 05:45:51.929630   10844 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:45:51.930696   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 05:45:51.943454   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 05:45:51.974548   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 05:45:51.992275   10844 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:45:51.992973   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 05:45:52.007941   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 05:45:52.037288   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 05:45:52.055441   10844 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:45:52.056030   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 05:45:52.069087   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 05:45:52.102530   10844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 05:45:52.122535   10844 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:45:52.122535   10844 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 05:45:52.133517   10844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
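The four grep/rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that is missing or does not point at https://control-plane.minikube.internal:8443 is removed so the kubeadm init phases below can regenerate it. Condensed into a Go sketch:

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		// Missing file or wrong endpoint: remove it so kubeadm can
    		// regenerate a consistent one.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path)
    		}
    	}
    }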
	I0603 05:45:52.162517   10844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 05:45:52.181471   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 05:45:52.462267   10844 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 05:45:52.462420   10844 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 05:45:52.462567   10844 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 05:45:52.462567   10844 command_runner.go:130] > [certs] Using the existing "sa" key
	I0603 05:45:52.462567   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 05:45:54.286276   10844 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 05:45:54.286388   10844 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 05:45:54.286423   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.8237674s)
	I0603 05:45:54.286423   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.598569   10844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:45:54.598909   10844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:45:54.598909   10844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:45:54.599126   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.706106   10844 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 05:45:54.706179   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 05:45:54.706218   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 05:45:54.706218   10844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 05:45:54.706218   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:45:54.810667   10844 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 05:45:54.810977   10844 api_server.go:52] waiting for apiserver process to appear ...
	I0603 05:45:54.823668   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:55.325904   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:55.836774   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.332919   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.837488   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:45:56.862164   10844 command_runner.go:130] > 1862
	I0603 05:45:56.862164   10844 api_server.go:72] duration metric: took 2.0512122s to wait for apiserver process to appear ...
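Waiting for the apiserver process is a simple poll: pgrep -xnf kube-apiserver.*minikube.* exits non-zero until a matching process exists, and the timestamps above show retries on roughly a half-second cadence until a PID appears (1862 here). A sketch of that loop, with the two-minute deadline as an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 1 while no kube-apiserver process matches.
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }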
	I0603 05:45:56.862164   10844 api_server.go:88] waiting for apiserver healthz status ...
	I0603 05:45:56.862164   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.344153   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 05:46:00.344153   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 05:46:00.344153   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.501412   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.501412   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:00.501412   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.513517   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.513517   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:00.868650   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:00.876085   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:00.876085   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:01.373507   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:01.384528   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 05:46:01.384528   10844 api_server.go:103] status: https://172.17.95.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 05:46:01.862832   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:46:01.870640   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
	I0603 05:46:01.871403   10844 round_trippers.go:463] GET https://172.17.95.88:8443/version
	I0603 05:46:01.871403   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:01.871403   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:01.871403   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:01.881771   10844 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 05:46:01.881771   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:01.881771   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Content-Length: 263
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:01 GMT
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Audit-Id: a5bab7d6-bece-41de-960c-f7ef97b8b6e4
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:01.881771   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:01.881771   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:01.881771   10844 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 05:46:01.881771   10844 api_server.go:141] control plane version: v1.30.1
	I0603 05:46:01.881771   10844 api_server.go:131] duration metric: took 5.0195889s to wait for apiserver health ...
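	
	Note on the polling above: the repeated 500s are the normal shape of an apiserver restart. /healthz aggregates every post-start hook, and here only poststarthook/rbac/bootstrap-roles is still settling ("reason withheld" is the endpoint's default redaction), so the client simply re-polls until the endpoint returns 200. A minimal sketch of such a polling loop, with the URL and timeout taken from the log; skipping TLS verification is an assumption for brevity (a real client would load the cluster CA instead):
	
    // Minimal sketch: poll an apiserver /healthz endpoint until it returns 200 OK.
    // URL and timeout mirror the log; InsecureSkipVerify is an assumption made
    // only because this sketch does not load the cluster CA bundle.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: control plane is serving
    			}
    			// A 500 listing "[-]poststarthook/... failed" means hooks are still settling.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://172.17.95.88:8443/healthz", 2*time.Minute))
    }
	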
	I0603 05:46:01.881771   10844 cni.go:84] Creating CNI manager for ""
	I0603 05:46:01.881771   10844 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 05:46:01.891146   10844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 05:46:01.910851   10844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 05:46:01.918344   10844 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 05:46:01.918344   10844 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 05:46:01.918415   10844 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 05:46:01.918415   10844 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 05:46:01.918415   10844 command_runner.go:130] > Access: 2024-06-03 12:44:25.864397100 +0000
	I0603 05:46:01.918415   10844 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 05:46:01.918415   10844 command_runner.go:130] > Change: 2024-06-03 12:44:13.868000000 +0000
	I0603 05:46:01.918497   10844 command_runner.go:130] >  Birth: -
	I0603 05:46:01.918497   10844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 05:46:01.918497   10844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 05:46:02.035951   10844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 05:46:03.149554   10844 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0603 05:46:03.149708   10844 command_runner.go:130] > daemonset.apps/kindnet configured
	I0603 05:46:03.149708   10844 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1137532s)
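	
	The CNI step above boils down to a single kubectl apply of the generated kindnet manifest, executed over SSH inside the guest. Run directly, the same command looks like this sketch (paths and binary version copied from the log; the os/exec wrapper is illustrative, not minikube's ssh_runner):
	
    // Sketch: the same kubectl apply the log shows, run via os/exec instead of
    // minikube's ssh_runner. Paths are copied from the log output above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
    	if err != nil {
    		panic(err)
    	}
    }
	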
	I0603 05:46:03.149708   10844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 05:46:03.149708   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:03.149708   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.149708   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.149708   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.159576   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:46:03.159576   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.159576   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.159576   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Audit-Id: 6654eba0-33f3-43a7-9055-36db84aa15f8
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.159576   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.162263   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85352 chars]
	I0603 05:46:03.168732   10844 system_pods.go:59] 12 kube-system pods found
	I0603 05:46:03.168732   10844 system_pods.go:61] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 05:46:03.168732   10844 system_pods.go:61] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Pending
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:46:03.168732   10844 system_pods.go:61] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 05:46:03.168732   10844 system_pods.go:61] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 05:46:03.168732   10844 system_pods.go:74] duration metric: took 19.0235ms to wait for pod list to return data ...
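	
	The "12 kube-system pods found" enumeration above is a single PodList request against the apiserver. For reference, a client-go sketch of the same call, assuming the guest's kubeconfig path from the log is readable:
	
    // Sketch: list kube-system pods with client-go, mirroring the PodList call
    // above. The kubeconfig path is taken from the log and assumed readable.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("  %q %s\n", p.Name, p.Status.Phase)
    	}
    }
	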
	I0603 05:46:03.168732   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:46:03.168732   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:46:03.168732   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.168732   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.168732   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.174802   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:03.174802   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.174802   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.174802   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Audit-Id: 9cfdf364-5833-4bf2-93d4-ada17267ae46
	I0603 05:46:03.174802   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.174802   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1748"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15626 chars]
	I0603 05:46:03.177147   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177197   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177242   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177242   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177278   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:46:03.177278   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:46:03.177278   10844 node_conditions.go:105] duration metric: took 8.5464ms to run NodePressure ...
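	
	The NodePressure verification reads each node's capacity and condition list out of the NodeList shown above: three nodes, each reporting 2 CPUs and 17734596Ki of ephemeral storage. A hedged client-go sketch of that check, reusing a clientset built as in the previous sketch (verifyNodePressure is an illustrative name, not minikube's):
	
    // Sketch: verify node capacity and pressure conditions, as the NodePressure
    // step does. verifyNodePressure is an illustrative name, not minikube's.
    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
    			n.Name, cpu.String(), eph.String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					return fmt.Errorf("node %s reports %s=True", n.Name, c.Type)
    				}
    			}
    		}
    	}
    	return nil
    }
	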
	I0603 05:46:03.177319   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 05:46:03.600558   10844 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 05:46:03.600558   10844 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 05:46:03.600642   10844 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 05:46:03.600642   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0603 05:46:03.600642   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.600642   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.600642   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.604401   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:03.604401   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.604401   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.604401   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Audit-Id: 41adc2f2-1d4b-4f2d-b4ba-0f9dc7981541
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.604401   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.605937   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1754"},"items":[{"metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1736","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0603 05:46:03.607356   10844 kubeadm.go:733] kubelet initialised
	I0603 05:46:03.607356   10844 kubeadm.go:734] duration metric: took 6.7139ms waiting for restarted kubelet to initialise ...
	I0603 05:46:03.607356   10844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:46:03.607356   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:03.607356   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.607356   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.607356   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.616366   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:46:03.616366   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.616366   10844 round_trippers.go:580]     Audit-Id: 711df3df-3d4b-44bd-959b-438fd3cb4bdc
	I0603 05:46:03.616366   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.617383   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.617383   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.617383   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.617383   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.619159   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1754"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87069 chars]
	I0603 05:46:03.622766   10844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.622766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:03.622766   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.622766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.622766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.625427   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.625836   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Audit-Id: 97b4e11c-3bfe-4a29-9bec-867b105c6afa
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.625836   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.625836   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.625836   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.625894   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.626050   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:03.626834   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.626834   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.626834   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.626906   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.628933   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.629290   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Audit-Id: e75c020c-40ff-433c-b1ab-e6227fca65f3
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.629290   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.629290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.629363   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.629363   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.629789   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.629936   10844 pod_ready.go:97] node "multinode-316400" hosting pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.629936   10844 pod_ready.go:81] duration metric: took 7.1697ms for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.629936   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
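	
	Every "(skipping!)" line in this stretch applies the same gate: before a pod wait is treated as meaningful, the hosting node must report Ready=True; otherwise the wait is short-circuited and logged as a WaitExtra error, as above. A small sketch of that predicate (nodeIsReady is an illustrative helper name):
	
    // Sketch: the node-readiness gate behind each "(skipping!)" line. The pod
    // wait only proceeds when the hosting node's Ready condition is True.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady is an illustrative helper: it reports whether the node's
    // Ready condition exists and is True ("False" and "Unknown" both fail).
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false // node has not reported a Ready condition yet
    }
	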
	I0603 05:46:03.629936   10844 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.629936   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:46:03.630474   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.630474   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.630474   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.634532   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:03.635012   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Audit-Id: 46d58ecb-4e01-412e-b1a4-f4d76d3d2558
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.635012   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.635012   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.635012   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.635287   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1736","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0603 05:46:03.635833   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.635897   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.635897   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.635897   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.638534   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.638953   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.638953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.638953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Audit-Id: 76e0d4d8-6f8b-49fb-961b-e456964ba094
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.638953   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.639112   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.639823   10844 pod_ready.go:97] node "multinode-316400" hosting pod "etcd-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.639823   10844 pod_ready.go:81] duration metric: took 9.8864ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.639823   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "etcd-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.639823   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.640061   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:46:03.640085   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.640112   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.640112   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.643083   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.643083   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.643083   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Audit-Id: 8a87c0f9-f18b-477a-a83d-81e5ef4078a6
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.643174   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.643174   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.643174   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.643263   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1749","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0603 05:46:03.644003   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.644003   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.644003   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.644003   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.646708   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.646708   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.646708   10844 round_trippers.go:580]     Audit-Id: 4d2547f7-17af-4a3e-8365-c026b24030fb
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.647156   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.647156   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.647156   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.647373   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.647569   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-apiserver-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.647569   10844 pod_ready.go:81] duration metric: took 7.65ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.647569   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-apiserver-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.647569   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.647569   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:46:03.647569   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.647569   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.647569   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.650340   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:03.650340   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.650340   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.650340   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Audit-Id: f16c7884-0a9c-4f8e-9b8b-ab886bcc7161
	I0603 05:46:03.650340   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.650340   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1732","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0603 05:46:03.652028   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:03.652028   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.652028   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.652028   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.657730   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:03.657730   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.657802   10844 round_trippers.go:580]     Audit-Id: 656a53b8-0eb5-4880-9f75-21747b13027c
	I0603 05:46:03.657802   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.657833   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.657833   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.657868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.657868   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.657900   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:03.658742   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-controller-manager-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.658810   10844 pod_ready.go:81] duration metric: took 11.2402ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:03.658860   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-controller-manager-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:03.658860   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:03.806608   10844 request.go:629] Waited for 147.5072ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:46:03.806893   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:46:03.806893   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:03.806893   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:03.806893   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:03.812233   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:03.812233   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:03.812483   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:03 GMT
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Audit-Id: 3da3227d-8c65-448c-bf45-e5b417278c40
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:03.812483   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:03.812483   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:03.812602   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0603 05:46:04.008741   10844 request.go:629] Waited for 195.2613ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:46:04.009107   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:46:04.009107   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.009107   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.009107   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.013465   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:04.014327   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.014327   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.014327   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.014327   10844 round_trippers.go:580]     Audit-Id: 52675769-063f-4a47-a5cb-51e5e80a6124
	I0603 05:46:04.014619   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1720","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:46:04.014619   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:46:04.015172   10844 pod_ready.go:81] duration metric: took 356.2769ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:04.015172   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
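	
	The "Waited ... due to client-side throttling" lines above and below come from client-go's default token-bucket rate limiter (5 requests/second with a burst of 10), not from server-side API priority and fairness, exactly as the message itself notes. A sketch of how a client raises those limits on rest.Config; the chosen values are illustrative, not a recommendation:
	
    // Sketch: raising client-go's client-side rate limits. The defaults that
    // produce the "Waited ..." messages are QPS 5 and Burst 10.
    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	config.QPS = 50    // default is 5 requests/second
    	config.Burst = 100 // default burst is 10
    	return kubernetes.NewForConfig(config)
    }
	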
	I0603 05:46:04.015172   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.210170   10844 request.go:629] Waited for 194.5553ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:46:04.210289   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:46:04.210289   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.210289   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.210289   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.213666   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.214308   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.214308   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.214308   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Audit-Id: 1ec071ec-56bb-4634-81ca-b3fe83687730
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.214419   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.214419   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.215978   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0603 05:46:04.413378   10844 request.go:629] Waited for 196.3724ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:04.413597   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:04.413597   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.413597   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.413597   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.419302   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:04.419302   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.419302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.419302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.419302   10844 round_trippers.go:580]     Audit-Id: 2f7c0a3a-f297-4f31-b59c-6c07514a7363
	I0603 05:46:04.419866   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:04.420232   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-proxy-ks64x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:04.420232   10844 pod_ready.go:81] duration metric: took 405.0585ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:04.420232   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-proxy-ks64x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:04.420232   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.601586   10844 request.go:629] Waited for 181.1549ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:46:04.601844   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:46:04.601844   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.601844   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.601844   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.606085   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.606085   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.606085   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.606085   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.606085   10844 round_trippers.go:580]     Audit-Id: c2d24a1b-1652-4f33-8a8b-3ecfd4337c26
	I0603 05:46:04.606167   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.606465   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"609","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0603 05:46:04.805770   10844 request.go:629] Waited for 198.2626ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:46:04.806179   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:46:04.806179   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:04.806179   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:04.806179   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:04.809996   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:04.809996   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:04.809996   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:04.809996   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:04.810193   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:04 GMT
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Audit-Id: 67ca5c7b-a8de-4ab8-b6ca-57a125a2f43b
	I0603 05:46:04.810193   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:04.810398   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"1676","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3826 chars]
	I0603 05:46:04.810499   10844 pod_ready.go:92] pod "kube-proxy-z26hc" in "kube-system" namespace has status "Ready":"True"
	I0603 05:46:04.810499   10844 pod_ready.go:81] duration metric: took 390.2665ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:04.810499   10844 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:05.009623   10844 request.go:629] Waited for 198.1488ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:46:05.009823   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:46:05.009823   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.009823   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.009885   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.013633   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:05.013633   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.013980   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Audit-Id: f7be52fb-b8db-435d-8c0c-5fb7106ea4da
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.013980   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.013980   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.014213   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1734","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0603 05:46:05.214723   10844 request.go:629] Waited for 199.4584ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.214932   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.214932   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.214932   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.214932   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.219400   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:05.219400   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Audit-Id: 8d2759a0-d182-4caf-8eec-cbe277482d91
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.219400   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.219400   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.219400   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.219400   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:05.220353   10844 pod_ready.go:97] node "multinode-316400" hosting pod "kube-scheduler-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:05.220414   10844 pod_ready.go:81] duration metric: took 409.9133ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	E0603 05:46:05.220414   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400" hosting pod "kube-scheduler-multinode-316400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400" has status "Ready":"False"
	I0603 05:46:05.220414   10844 pod_ready.go:38] duration metric: took 1.6130522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
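The pod_ready lines above record per-pod waits on the Kubernetes Ready condition for each system-critical component. As a point of reference only, here is a minimal sketch of that kind of check, assuming client-go; this is illustrative, not minikube's actual pod_ready.go:

    package podready

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's PodReady condition reports True,
    // mirroring the "waiting up to 4m0s for pod ... to be Ready" loop above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }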
	I0603 05:46:05.220474   10844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 05:46:05.242018   10844 command_runner.go:130] > -16
	I0603 05:46:05.242109   10844 ops.go:34] apiserver oom_adj: -16
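The -16 read back above is the kube-apiserver's legacy /proc/<pid>/oom_adj value; negative values deprioritize a process for the kernel OOM killer (the modern knob is oom_score_adj, which the kubelet manages for critical pods, and the kernel keeps the two files in sync). A sketch of the same check the ssh_runner shells out for above, done natively in Go:

    package oomcheck

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // OOMAdj reads /proc/<pid>/oom_adj, the value the test obtains above via
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` (here: "-16").
    func OOMAdj(pid int) (string, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }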
	I0603 05:46:05.242109   10844 kubeadm.go:591] duration metric: took 13.583325s to restartPrimaryControlPlane
	I0603 05:46:05.242109   10844 kubeadm.go:393] duration metric: took 13.6486418s to StartCluster
	I0603 05:46:05.242109   10844 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:46:05.242109   10844 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:46:05.243914   10844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:46:05.245415   10844 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 05:46:05.248790   10844 out.go:177] * Verifying Kubernetes components...
	I0603 05:46:05.245587   10844 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 05:46:05.245697   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:46:05.256738   10844 out.go:177] * Enabled addons: 
	I0603 05:46:05.259080   10844 addons.go:510] duration metric: took 13.4927ms for enable addons: enabled=[]
	I0603 05:46:05.267034   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:46:05.532765   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:46:05.562711   10844 node_ready.go:35] waiting up to 6m0s for node "multinode-316400" to be "Ready" ...
	I0603 05:46:05.562796   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:05.562796   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:05.562796   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:05.562796   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:05.567381   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:05.567381   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Audit-Id: 7e2a5c7f-e003-4914-9d7d-581639571f34
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:05.567381   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:05.567381   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:05.567381   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:05 GMT
	I0603 05:46:05.567381   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
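From this point the log is a GET poll on the node object roughly every 500ms until its Ready condition flips. A hedged alternative sketch, assuming client-go: a watch on the single node reacts to the status change without the repeated GETs that fill the remainder of this log (names below are illustrative, not minikube's):

    package nodeready

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReadyWatch blocks until the named node's NodeReady condition is
    // True, using a server-side-filtered watch instead of a poll loop.
    func waitNodeReadyWatch(ctx context.Context, cs kubernetes.Interface, name string) error {
    	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
    		FieldSelector: "metadata.name=" + name, // filter to the one node
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		node, ok := ev.Object.(*corev1.Node)
    		if !ok {
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    	}
    	return fmt.Errorf("watch on node %q closed before Ready", name)
    }

A production version would list first (or use client-go's watch tools) so a node that is already Ready when the watch opens is not missed; polling, as minikube does here, sidesteps that race at the cost of the repeated requests logged below.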
	I0603 05:46:06.074643   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:06.074692   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:06.074692   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:06.074692   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:06.079283   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:06.079345   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:06 GMT
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Audit-Id: 4adcb52e-20ee-4162-8284-a92b99c18ab2
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:06.079345   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:06.079345   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:06.079345   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:06.080318   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:06.577330   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:06.577330   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:06.577330   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:06.577330   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:06.584367   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:06.584367   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:06.584447   10844 round_trippers.go:580]     Audit-Id: 3b299e94-176f-4180-a779-18102d14fe10
	I0603 05:46:06.584465   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:06.584465   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:06.584465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:06.584465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:06.584492   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:06 GMT
	I0603 05:46:06.584492   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.068441   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:07.068517   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:07.068517   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:07.068517   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:07.073023   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:07.073023   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Audit-Id: 4c5fb513-144f-4dd5-8552-478d817d21b4
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:07.073023   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:07.073023   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:07.073023   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:07 GMT
	I0603 05:46:07.074000   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.577364   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:07.577428   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:07.577428   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:07.577428   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:07.581963   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:07.581963   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:07.581963   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:07.581963   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:07 GMT
	I0603 05:46:07.581963   10844 round_trippers.go:580]     Audit-Id: e4e16eaf-1f14-4ab9-9d35-3ffe7e0bd927
	I0603 05:46:07.582637   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:07.582817   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:07.583131   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:08.078773   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:08.078773   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:08.078887   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:08.078887   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:08.082818   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:08.083175   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Audit-Id: 05c7671c-7cd1-46c5-a164-e25a1f5c631e
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:08.083175   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:08.083265   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:08.083265   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:08.083265   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:08 GMT
	I0603 05:46:08.083772   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:08.576841   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:08.576916   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:08.576916   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:08.576916   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:08.581206   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:08.581652   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Audit-Id: b4edd00e-0f89-4f66-8e3e-fc74abc2604d
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:08.581652   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:08.581652   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:08.581652   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:08.581752   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:08 GMT
	I0603 05:46:08.581999   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:09.071957   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:09.071957   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:09.071957   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:09.071957   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:09.074540   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:09.074540   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:09.075469   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:09.075469   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:09.075469   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:09.075469   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:09.075589   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:09 GMT
	I0603 05:46:09.075589   10844 round_trippers.go:580]     Audit-Id: 8fc7f7bf-3b36-4c58-b6a1-661a52e71393
	I0603 05:46:09.076023   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:09.573744   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:09.573828   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:09.573828   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:09.573914   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:09.578011   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:09.578011   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:09.578101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:09.578101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:09 GMT
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Audit-Id: 2be9aa31-65a2-4968-ad39-ac28e016d90f
	I0603 05:46:09.578101   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:09.578301   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:10.071366   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:10.071563   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:10.071563   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:10.071563   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:10.083357   10844 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 05:46:10.083791   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:10.083791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:10.083791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:10 GMT
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Audit-Id: b3c237d8-b16d-48b5-9a3d-47a314a0aa94
	I0603 05:46:10.083791   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:10.083989   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:10.084704   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:10.570200   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:10.570317   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:10.570317   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:10.570317   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:10.574521   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:10.574521   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:10.574521   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:10.574521   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:10 GMT
	I0603 05:46:10.574521   10844 round_trippers.go:580]     Audit-Id: 47a858a9-2baf-4a00-82b8-953bf127f2b7
	I0603 05:46:10.574521   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:11.070062   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:11.070062   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:11.070062   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:11.070062   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:11.075195   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:11.075195   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:11.075195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:11 GMT
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Audit-Id: 9b176999-6cab-496c-97a5-f1d75bd80f83
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:11.075195   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:11.075195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:11.075195   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:11.569387   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:11.569387   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:11.569387   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:11.569387   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:11.572978   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:11.573840   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:11.573840   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:11 GMT
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Audit-Id: 6b287fc1-376a-4e53-87a5-a686649f32ba
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:11.573840   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:11.573840   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:11.574061   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.066027   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:12.066371   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:12.066371   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:12.066371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:12.069983   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:12.070161   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:12.070161   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:12.070161   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:12 GMT
	I0603 05:46:12.070161   10844 round_trippers.go:580]     Audit-Id: e5a460a5-afb2-42be-b8e3-7e1a20f7f7da
	I0603 05:46:12.070335   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.569196   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:12.569196   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:12.569524   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:12.569524   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:12.572881   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:12.572881   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:12.572881   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:12.572881   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:12 GMT
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Audit-Id: d74734b3-e0c6-45c0-94f5-002662ec6e85
	I0603 05:46:12.572881   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:12.572881   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1729","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0603 05:46:12.574486   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:13.079064   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:13.079064   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:13.079064   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:13.079064   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:13.082104   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:13.082104   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:13.082104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:13.082104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:13 GMT
	I0603 05:46:13.082104   10844 round_trippers.go:580]     Audit-Id: 107bf1cd-327a-4245-b5af-779380b9e0f4
	I0603 05:46:13.082104   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1840","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0603 05:46:13.568103   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:13.568103   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:13.568103   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:13.568103   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:13.571672   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:13.571913   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Audit-Id: f71c7a58-c235-49bb-b897-30b32d67dd2f
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:13.571913   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:13.571913   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:13.571913   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:13 GMT
	I0603 05:46:13.572032   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.070100   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:14.070100   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:14.070256   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:14.070256   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:14.075036   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:14.075036   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:14.075036   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:14.075036   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:14 GMT
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Audit-Id: 6b22128b-93df-425c-b69c-83ccba85229b
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:14.075142   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:14.075695   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.570114   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:14.570189   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:14.570189   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:14.570298   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:14.574290   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:14.574290   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:14.574290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:14.574290   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:14 GMT
	I0603 05:46:14.574290   10844 round_trippers.go:580]     Audit-Id: ac83ad67-45a3-4df0-8ed1-78c3cf0d1193
	I0603 05:46:14.575133   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:14.576346   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:14.577079   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:15.070465   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:15.070465   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:15.070465   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:15.070465   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:15.075081   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:15.075159   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:15.075159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:15.075159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:15.075159   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:15 GMT
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Audit-Id: 0e6f0b3f-3e1c-479f-a577-2e66f78bce92
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:15.075249   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:15.076042   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:15.571590   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:15.571590   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:15.571590   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:15.571590   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:15.576154   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:15.576569   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:15.576700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:15.576700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:15 GMT
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Audit-Id: ac2f00ad-42e3-423c-856f-b3cae204d6ee
	I0603 05:46:15.576700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:15.576942   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.070883   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:16.071037   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:16.071037   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:16.071037   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:16.074729   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:16.074820   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:16.074820   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:16.074820   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:16.074820   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:16 GMT
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Audit-Id: dc7f4f08-c8fd-486a-bafe-d8b154b85c93
	I0603 05:46:16.074888   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:16.074914   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.568347   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:16.568409   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:16.568409   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:16.568409   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:16.583832   10844 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 05:46:16.583832   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:16 GMT
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Audit-Id: 3a5afa14-1218-4de9-8aa2-7c8f3ef9a5b3
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:16.583832   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:16.583832   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:16.583832   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:16.584869   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:16.585890   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:17.069126   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:17.069371   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:17.069371   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:17.069371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:17.073216   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:17.074235   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Audit-Id: 4ff4cb46-c2d9-4ac8-afe2-ee491e15edb1
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:17.074268   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:17.074268   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:17.074268   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:17 GMT
	I0603 05:46:17.074404   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:17.567851   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:17.567851   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:17.567851   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:17.567851   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:17.571459   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:17.572203   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:17 GMT
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Audit-Id: b9bdccbc-7de3-41d1-8655-a420ca08653c
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:17.572203   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:17.572203   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:17.572203   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:17.572203   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:18.065911   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:18.065911   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:18.065911   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:18.065911   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:18.069487   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:18.069487   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:18 GMT
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Audit-Id: 67e5227d-dcb4-43e6-b25a-897b79f42137
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:18.070298   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:18.070298   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:18.070298   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:18.070462   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:18.565739   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:18.565793   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:18.565793   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:18.565793   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:18.570357   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:18.570737   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:18.570737   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:18.570737   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:18 GMT
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Audit-Id: 61bb0a33-31fd-4a1a-9e61-a0bb097ee8a1
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:18.570737   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:18.571127   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:19.065584   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:19.065584   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:19.065711   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:19.065711   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:19.071741   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:19.071741   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:19.071741   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:19.071741   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:19 GMT
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Audit-Id: bf2d77e7-351e-421c-b07e-ede7d88cd4e1
	I0603 05:46:19.071741   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:19.072665   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:19.072665   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:19.576811   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:19.577060   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:19.577060   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:19.577060   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:19.580428   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:19.581433   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Audit-Id: bbee6213-6d48-4de1-904b-1f2bb2d1d301
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:19.581433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:19.581433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:19.581433   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:19 GMT
	I0603 05:46:19.582292   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:20.075910   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:20.075910   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:20.075910   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:20.075910   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:20.081097   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:20.081097   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Audit-Id: b0b54a45-379a-4c6a-8e4f-778e74972f17
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:20.081097   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:20.081263   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:20.081263   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:20.081263   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:20 GMT
	I0603 05:46:20.081575   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:20.575599   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:20.575807   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:20.575807   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:20.575807   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:20.580445   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:20.580748   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Audit-Id: 89ce28af-65d4-421e-9769-b9b912529747
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:20.580748   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:20.580748   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:20.580748   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:20 GMT
	I0603 05:46:20.581007   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:21.076001   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:21.076001   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:21.076001   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:21.076001   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:21.080618   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:21.080618   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:21.080788   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:21.080788   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:21 GMT
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Audit-Id: ea74ae3a-2bb4-4e64-a02a-736c4771d45c
	I0603 05:46:21.080788   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:21.081081   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:21.081731   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:21.577892   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:21.577892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:21.577892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:21.577892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:21.582493   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:21.582822   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Audit-Id: 5fe8c26c-adc6-4506-a64f-89f7b9dd2651
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:21.582822   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:21.582822   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:21.582822   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:21.582916   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:21 GMT
	I0603 05:46:21.583116   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:22.078395   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:22.078395   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:22.078395   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:22.078395   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:22.084939   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:22.084939   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:22.085024   10844 round_trippers.go:580]     Audit-Id: cd385bef-d152-40c2-ad35-b19185cb0741
	I0603 05:46:22.085024   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:22.085081   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:22.085103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:22.085103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:22.085103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:22 GMT
	I0603 05:46:22.085103   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:22.578126   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:22.578126   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:22.578223   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:22.578223   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:22.582030   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:22.583264   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:22.583264   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:22.583264   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:22 GMT
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Audit-Id: 2dc6c365-a588-478b-af58-f1f4e01df756
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:22.583349   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:22.583561   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:23.077114   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:23.077114   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:23.077114   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:23.077114   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:23.081800   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:23.081861   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Audit-Id: 2e0145cc-ae20-44a8-abf4-79d00fde2c68
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:23.081861   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:23.081861   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:23.081861   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:23 GMT
	I0603 05:46:23.082466   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:23.083109   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:23.575741   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:23.575741   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:23.576028   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:23.576028   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:23.580351   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:23.580351   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:23 GMT
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Audit-Id: 0a0d481c-34b4-4894-93a2-b466f6d64d14
	I0603 05:46:23.580351   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:23.580814   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:23.580814   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:23.580814   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:23.581667   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:24.073667   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:24.073667   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:24.073667   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:24.073667   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:24.077226   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:24.078230   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:24.078230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:24 GMT
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Audit-Id: 526bfbed-8787-40b4-a45f-ddd6e3037735
	I0603 05:46:24.078230   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:24.078337   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:24.078337   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:24.079237   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:24.573646   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:24.573826   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:24.573826   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:24.573826   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:24.577479   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:24.577479   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:24.577479   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:24.577479   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:24 GMT
	I0603 05:46:24.577479   10844 round_trippers.go:580]     Audit-Id: 18bad2a6-97eb-4f1e-8654-2dcb107fc991
	I0603 05:46:24.578796   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.075565   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:25.075565   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:25.075565   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:25.075565   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:25.082159   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:25.082159   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Audit-Id: 5c1c0c7e-0f37-4c6d-97eb-91bafae935b6
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:25.082159   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:25.082159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:25.082159   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:25.082511   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:25 GMT
	I0603 05:46:25.083181   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.579199   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:25.579199   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:25.579199   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:25.579199   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:25.583802   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:25.583953   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:25 GMT
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Audit-Id: b49e5177-6df8-4437-9e5f-dae8488ceb0a
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:25.583953   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:25.583953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:25.583953   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:25.584438   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:25.585104   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:26.067574   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:26.067629   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:26.067695   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:26.067695   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:26.070160   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:26.070160   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:26.070160   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:26.070160   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:26 GMT
	I0603 05:46:26.070160   10844 round_trippers.go:580]     Audit-Id: c498fb7b-1ec7-4163-9fa4-8791b74dcb94
	I0603 05:46:26.070160   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:26.567460   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:26.567598   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:26.567598   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:26.567598   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:26.571673   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:26.571673   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:26.571778   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:26 GMT
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Audit-Id: d50127b8-d425-4711-a59e-31c71c173b3f
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:26.571778   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:26.571778   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:26.571952   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:27.066403   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:27.066403   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:27.066403   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:27.066403   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:27.070058   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:27.070403   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:27.070403   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:27.070490   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:27.070490   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:27 GMT
	I0603 05:46:27.070490   10844 round_trippers.go:580]     Audit-Id: 9ad1701f-1873-439d-b7aa-30d831faf859
	I0603 05:46:27.070490   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:27.568255   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:27.568255   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:27.568255   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:27.568255   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:27.572870   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:27.572870   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:27 GMT
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Audit-Id: 1bbdaa8e-dc6f-4fd7-a4c0-87e43b385069
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:27.573791   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:27.573791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:27.573791   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:27.573983   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:28.067438   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:28.067628   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:28.067628   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:28.067628   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:28.070996   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:28.071538   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:28.071538   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:28 GMT
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Audit-Id: 783b6c64-20f6-4b28-a7b4-9650b7d9822a
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:28.071538   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:28.071538   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:28.072320   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:28.072578   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:28.566384   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:28.566384   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:28.566384   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:28.566384   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:28.570732   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:28.570732   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Audit-Id: 490896be-5ac1-4ec2-9bcb-da70d04c90dc
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:28.570732   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:28.570732   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:28.570732   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:28 GMT
	I0603 05:46:28.570732   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:29.064639   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:29.064845   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:29.064845   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:29.064845   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:29.068499   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:29.069320   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:29.069320   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:29.069320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:29.069320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:29.069320   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:29 GMT
	I0603 05:46:29.069409   10844 round_trippers.go:580]     Audit-Id: 6211cb6b-f8e0-42a5-bd89-510fdcda5d1f
	I0603 05:46:29.069409   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:29.069836   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:29.578833   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:29.579067   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:29.579067   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:29.579067   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:29.587632   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:46:29.587632   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Audit-Id: 4b2c084f-c84b-40fd-9d86-032803f81980
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:29.587632   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:29.587632   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:29.587632   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:29 GMT
	I0603 05:46:29.587632   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:30.074571   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:30.074727   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:30.074727   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:30.074727   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:30.078394   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:30.079315   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:30.079315   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:30 GMT
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Audit-Id: 8e95ebe1-32fd-4549-a9d6-5f81a10fe8d1
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:30.079315   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:30.079315   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:30.079691   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:30.080076   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:30.574878   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:30.574973   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:30.574973   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:30.574973   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:30.578776   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:30.578776   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Audit-Id: 87ed4985-acc0-48a0-a112-aac2d51a953e
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:30.579433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:30.579433   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:30.579433   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:30 GMT
	I0603 05:46:30.579677   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:31.063582   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:31.063582   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:31.063582   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:31.064007   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:31.067873   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:31.067924   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:31.067924   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:31 GMT
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Audit-Id: 62e24bde-a036-46a1-8346-e6d6b311c053
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:31.067924   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:31.067924   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:31.067924   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:31.563651   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:31.563651   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:31.563651   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:31.563651   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:31.567313   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:31.568228   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:31.568228   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:31 GMT
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Audit-Id: 7b28854b-7320-46d3-ac7c-bdaf60c86c7c
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:31.568228   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:31.568313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:31.568410   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.065940   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:32.066010   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:32.066010   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:32.066010   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:32.070454   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:32.070454   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:32.070454   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:32.070454   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:32.070454   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:32 GMT
	I0603 05:46:32.070454   10844 round_trippers.go:580]     Audit-Id: f81f4c25-2293-45db-8b5d-32782581d530
	I0603 05:46:32.070552   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:32.070552   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:32.070806   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.566344   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:32.566435   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:32.566435   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:32.566435   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:32.569840   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:32.570779   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:32.570829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:32.570829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:32 GMT
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Audit-Id: e752cc27-7f11-47e9-ab87-7ef3b27e7b3b
	I0603 05:46:32.570829   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:32.570829   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:32.571447   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:33.070475   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:33.070475   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:33.070475   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:33.070555   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:33.074452   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:33.075226   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:33.075226   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:33.075226   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:33.075226   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:33 GMT
	I0603 05:46:33.075226   10844 round_trippers.go:580]     Audit-Id: 66bd618f-5e82-471e-8898-c94a374d0d7c
	I0603 05:46:33.075285   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:33.075285   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:33.075285   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:33.567934   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:33.567934   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:33.567934   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:33.567934   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:33.572289   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:33.572289   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:33.572289   10844 round_trippers.go:580]     Audit-Id: 5a770c06-e325-4b54-84d9-86ed273ace5b
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:33.572524   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:33.572524   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:33.572524   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:33 GMT
	I0603 05:46:33.572643   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.070177   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:34.070177   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:34.070177   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:34.070177   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:34.075281   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:34.075281   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:34.075281   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:34 GMT
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Audit-Id: 81138375-de80-4782-8b76-6f36480d0fbd
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:34.075372   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:34.075372   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:34.075372   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:34.075840   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.568295   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:34.568295   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:34.568295   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:34.568295   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:34.574232   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:34.574232   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:34.574302   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:34.574326   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:34.574326   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:34 GMT
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Audit-Id: c5302c01-9acd-42d6-a5d0-7d94359e5a21
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:34.574354   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:34.574883   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:34.575126   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:35.072163   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:35.072163   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:35.072163   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:35.072163   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:35.076002   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:35.076525   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:35.076525   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:35.076525   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:35.076525   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:35.076525   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:35 GMT
	I0603 05:46:35.076585   10844 round_trippers.go:580]     Audit-Id: 47125028-a6f6-4006-81b0-669c128bb885
	I0603 05:46:35.076585   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:35.076585   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:35.570924   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:35.571032   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:35.571032   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:35.571032   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:35.574720   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:35.575542   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:35.575542   10844 round_trippers.go:580]     Audit-Id: e0a91b25-751c-4d83-b7c6-2cae33cd48ca
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:35.575616   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:35.575616   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:35.575616   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:35 GMT
	I0603 05:46:35.575616   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:36.068877   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:36.068978   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:36.068978   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:36.068978   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:36.071960   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:36.071960   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Audit-Id: 8ef66d3a-f616-41d7-914d-bb314100956f
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:36.071960   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:36.071960   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:36.071960   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:36 GMT
	I0603 05:46:36.072910   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:36.567342   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:36.567342   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:36.567342   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:36.567342   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:36.571089   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:36.571089   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:36.571378   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:36.571378   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:36 GMT
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Audit-Id: 65d463e1-73ba-49f4-a6f4-de645f6dbcff
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:36.571378   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:36.571690   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:37.067666   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:37.067740   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:37.067740   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:37.067740   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:37.071536   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:37.071987   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Audit-Id: 46739dcc-701d-4c3c-9c49-db76061f796c
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:37.071987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:37.071987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:37.071987   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:37 GMT
	I0603 05:46:37.072456   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:37.072953   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:37.568495   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:37.568495   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:37.568495   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:37.568495   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:37.573122   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:37.573210   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:37.573210   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:37.573210   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:37.573210   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:37 GMT
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Audit-Id: 339cba3f-9192-485a-bd19-c4e2b6aecbc4
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:37.573323   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:37.573468   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:38.067970   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:38.067970   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:38.067970   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:38.067970   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:38.071756   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:38.072739   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:38.072739   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Audit-Id: a1577ff5-a08c-41cd-8a52-cbea27e548e7
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:38.072739   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:38.072739   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:38.073047   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:38.566184   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:38.566184   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:38.566184   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:38.566184   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:38.570579   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:38.570579   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:38.570579   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:38.570579   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 05:46:38.570579   10844 round_trippers.go:580]     Audit-Id: 2dbb8271-d0e2-4bd3-9e51-88a9aa5dbf9a
	I0603 05:46:38.570579   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:39.066774   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:39.066774   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:39.066774   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:39.066774   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:39.072360   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:39.072421   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Audit-Id: 1d65e360-1b71-458b-aa79-1993565c0c86
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:39.072421   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:39.072421   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:39.072421   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 05:46:39.072805   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:39.073293   10844 node_ready.go:53] node "multinode-316400" has status "Ready":"False"
	I0603 05:46:39.568445   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:39.568445   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:39.568445   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:39.568445   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:39.573045   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:39.573462   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Audit-Id: 1c2891d0-e198-45f4-88bf-c34204b35d91
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:39.573462   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:39.573462   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:39.573462   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:39.574036   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:40.069965   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:40.070145   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:40.070145   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:40.070145   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:40.074896   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:40.074896   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:40.075566   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:40.075566   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 05:46:40.075566   10844 round_trippers.go:580]     Audit-Id: 279a2e19-d355-49d9-b371-e1837036748e
	I0603 05:46:40.075623   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:40.563817   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:40.563817   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:40.563898   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:40.563898   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:40.567202   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:40.567202   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:40.567202   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:40.567202   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:40.567202   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 05:46:40.567202   10844 round_trippers.go:580]     Audit-Id: f8a24f93-e404-4eb4-b0b4-d135d40a7083
	I0603 05:46:40.567923   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:40.567923   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:40.567992   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:41.066331   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.066331   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.066331   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.066331   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.069962   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:41.070860   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.070860   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.070860   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.070860   10844 round_trippers.go:580]     Audit-Id: deb0aff3-0585-46da-8c84-8d1e31951688
	I0603 05:46:41.071143   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1844","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0603 05:46:41.566381   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.566460   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.566460   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.566460   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.570868   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:41.570868   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Audit-Id: 0800d69b-66d4-4dce-b880-d5a1d269f949
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.570868   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.570868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.570868   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.571004   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:41.571550   10844 node_ready.go:49] node "multinode-316400" has status "Ready":"True"
	I0603 05:46:41.571727   10844 node_ready.go:38] duration metric: took 36.0086201s for node "multinode-316400" to be "Ready" ...
	I0603 05:46:41.571727   10844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
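The node_ready and pod_ready waits logged here are plain polling loops: the client GETs the node (and later each system-critical pod) roughly every 500ms and inspects its Ready condition until it flips to True or the budget expires. A minimal client-go sketch of the same pattern, under the assumption of a standard kubeconfig; the helper names are illustrative and this is not minikube's actual node_ready/pod_ready code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True,
    // mirroring the "Ready":"False"/"True" checks in the log above.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms (the cadence visible in the timestamps above),
    	// giving up after the 6m0s budget the log mentions.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-316400", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			return nodeReady(n), nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node multinode-316400 is Ready")
    }

Outside a test harness, the equivalent check is usually just "kubectl wait --for=condition=Ready node/multinode-316400 --timeout=6m".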
	I0603 05:46:41.571846   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:46:41.571892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.571892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.571892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.579805   10844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:46:41.579805   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Audit-Id: 583c8ed6-c5b8-4236-b5a4-dc159faa73b6
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.579805   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.579805   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.579805   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.581748   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1890"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86508 chars]
	I0603 05:46:41.586176   10844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:46:41.586369   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:41.586369   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.586369   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.586369   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.592228   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:41.593103   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.593103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.593103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Audit-Id: af5eec5c-8c5c-4ff5-bbf5-27318c458233
	I0603 05:46:41.593103   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.593273   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:41.593893   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:41.593893   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:41.593974   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:41.593974   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:41.596240   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:41.596240   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:41.596240   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:41.596240   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:41.596240   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:41.596240   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 05:46:41.597120   10844 round_trippers.go:580]     Audit-Id: 9eb9f2f2-b7bd-464f-899d-8bda643967b0
	I0603 05:46:41.597120   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:41.597686   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:42.099382   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:42.099382   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.099472   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.099472   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.103829   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:42.103829   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.103829   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.103829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.103829   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Audit-Id: 27d5169b-a1c3-4a70-856f-7332df0ca951
	I0603 05:46:42.104565   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.104883   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:42.105553   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:42.105704   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.105704   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.105704   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.110904   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:42.110904   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.110904   10844 round_trippers.go:580]     Audit-Id: 93bf5d72-e328-4c6b-837f-1add06a617ab
	I0603 05:46:42.110970   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.110970   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.110970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.110997   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.110997   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.112656   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:42.603745   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:42.603851   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.603869   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.603869   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.607891   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:42.607891   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Audit-Id: 57969027-9bf0-4c88-a5bb-6b9927e3ad9f
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.607891   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.607891   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.608052   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.608052   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.608204   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:42.608535   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:42.608535   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:42.608535   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:42.608535   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:42.612139   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:42.612139   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:42.612139   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Audit-Id: b87066dd-eabf-492f-a856-ff84c9ef9329
	I0603 05:46:42.612139   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:42.612887   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:42.612887   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:42.613247   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1889","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0603 05:46:43.090946   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:43.091014   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.091014   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.091014   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.099769   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:46:43.099769   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.099769   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.100451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Audit-Id: cb4a1f7c-3600-4af7-94f1-98584c83b695
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.100451   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.100617   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:43.101375   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:43.101375   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.101375   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.101375   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.103496   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:43.103496   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Audit-Id: 86b00535-9c31-4fa5-a0f9-ca96ec3bee13
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.103496   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.103496   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.103496   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.103496   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:43.591408   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:43.591408   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.591408   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.591408   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.597989   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:43.597989   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.598987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.598987   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Audit-Id: bb27ff89-be44-4c16-ae45-edfd25f59647
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.599010   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.599231   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:43.600304   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:43.600364   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:43.600364   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:43.600364   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:43.604685   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:43.604685   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:43.605191   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:43.605191   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Audit-Id: 8f43b6d8-2f96-4b28-bfa4-29d3d8df26cb
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:43.605191   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:43.605598   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:43.606075   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:44.090714   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:44.090714   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.090714   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.090714   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.095323   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:44.095619   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.095687   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Audit-Id: 93cedeaf-e621-47fb-9c6a-d61ed7f01d25
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.095687   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.095687   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.096591   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:44.097372   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:44.097372   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.097372   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.097457   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.100577   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:44.100577   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.100757   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Audit-Id: e7cac9d4-982e-41a5-b00d-d95928bb1b85
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.100757   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.100757   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.101182   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:44.592561   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:44.592561   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.592561   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.592561   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.596199   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:44.596199   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.596199   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Audit-Id: 925fb9b5-5a63-4d0f-8a62-743341c857ba
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.597228   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.597276   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.597276   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.597451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:44.597734   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:44.598321   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:44.598321   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:44.598321   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:44.603700   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:44.603700   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:44.603700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:44.603700   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 05:46:44.603700   10844 round_trippers.go:580]     Audit-Id: 136878a4-6043-4cf0-9280-5cf09a8082da
	I0603 05:46:44.604624   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:45.097706   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:45.097706   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.097793   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.097793   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.101101   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:45.101101   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.101101   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.101101   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.101101   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.101188   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.101188   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.101188   10844 round_trippers.go:580]     Audit-Id: 2ed0fd87-87ef-466a-9aa3-9e0fb64882a3
	I0603 05:46:45.101394   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:45.102019   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:45.102019   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.102019   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.102019   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.104025   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:45.104397   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Audit-Id: cf7867dd-5cca-4769-8f05-37a786cd5cfb
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.104397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.104397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.104397   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.104629   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:45.588316   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:45.588316   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.588316   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.588316   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.593103   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:45.593103   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.593103   10844 round_trippers.go:580]     Audit-Id: 4ec62c64-d0ab-4f25-8e9a-9822a1f0630d
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.593182   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.593182   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.593182   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.594431   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:45.595140   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:45.595140   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:45.595140   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:45.595140   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:45.597734   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:45.598666   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:45.598666   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:45.598666   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Audit-Id: 60c8a907-d001-4ea2-8142-f9818c010b7d
	I0603 05:46:45.598666   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:45.599083   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:46.091840   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:46.091840   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.091840   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.091840   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.096844   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:46.096918   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Audit-Id: d212f569-b4bf-461c-969c-d96458abebfb
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.096918   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.096918   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.096986   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.097009   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.097039   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:46.097869   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:46.097869   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.097869   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.097869   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.101612   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:46.102148   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.102148   10844 round_trippers.go:580]     Audit-Id: b04b45ae-82a8-46a6-afeb-9ceb29b28fed
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.102220   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.102220   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.102220   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.102678   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:46.102951   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
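The pattern above repeats for the rest of the wait: minikube's pod_ready helper re-fetches the coredns-7db6d8ff4d-4hrc6 pod and the multinode-316400 node roughly every 500 ms and keeps logging "Ready":"False" until the pod's Ready condition turns true or the wait times out. A minimal client-go sketch of the same readiness poll (illustrative only, not minikube's actual helper; it assumes a kubeconfig at the default path and hard-codes the names from this log):

// readiness_poll.go - a minimal sketch of the pod-readiness poll visible in
// the log above. Illustrative only: names are taken from this log, and the
// kubeconfig is assumed to live at the default ~/.kube/config location.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// the field the pod_ready.go lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll at the ~500 ms cadence visible in the timestamps above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-4hrc6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}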
	I0603 05:46:46.587064   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:46.587064   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.587064   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.587064   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.592645   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:46.592694   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.592694   10844 round_trippers.go:580]     Audit-Id: a1ca5e3d-d184-4927-bf4e-98611b3a6e81
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.592780   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.592780   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.592780   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.592960   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:46.593157   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:46.593732   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:46.593732   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:46.593732   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:46.597026   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:46.597026   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:46.597026   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:46.597026   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:46.597324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:46.597324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:46.597324   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 05:46:46.597324   10844 round_trippers.go:580]     Audit-Id: 35c6f14f-c16c-435b-b6c7-1fdb570eb043
	I0603 05:46:46.597694   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:47.101543   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:47.101748   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.101748   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.101748   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.105736   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:47.105816   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Audit-Id: ca108275-6223-4bf9-a5f0-4cc84a54f4a6
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.105816   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.105816   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.105816   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.106154   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:47.106590   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:47.106590   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.106590   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.106590   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.109186   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:47.109854   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.109854   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.109854   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Audit-Id: 4521aba3-9f74-44fe-b23f-721e15790843
	I0603 05:46:47.109854   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.110079   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:47.598036   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:47.598036   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.598124   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.598124   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.601455   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:47.601712   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Audit-Id: f9c684d6-a3cf-4d50-9e01-f47e721118ee
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.601712   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.601712   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.601712   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.601773   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.601908   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:47.602691   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:47.602691   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:47.602691   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:47.602691   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:47.605397   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:47.605397   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Audit-Id: a71c2d0e-e862-4994-be72-c02f866ee520
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:47.605397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:47.605397   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:47.605397   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 05:46:47.606161   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:48.096038   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:48.096038   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.096038   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.096038   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.100703   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:48.100896   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.100896   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.100896   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Audit-Id: ae2e4b1c-82d1-4c35-ac60-c37e7224cd64
	I0603 05:46:48.100896   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.100974   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.100974   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:48.101975   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:48.102055   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.102055   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.102055   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.104288   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:48.105295   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Audit-Id: 322b0b0c-ea20-408e-a436-ecb60f637781
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.105341   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.105341   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.105341   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.105758   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:48.106227   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
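Each polling iteration pairs the pod GET with a GET of the node the pod is scheduled on, multinode-316400, so the wait can also notice when the node itself is unhealthy. That companion check reads the node's status conditions the same way the pod check does; a sketch under the same assumptions and imports as the example above:

// isNodeReady mirrors isPodReady for the Node object fetched in the same
// iteration (corev1 is the k8s.io/api/core/v1 import from the sketch above).
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}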
	I0603 05:46:48.594461   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:48.594764   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.594764   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.594764   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.598657   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:48.599655   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Audit-Id: 720fb9c1-514b-4fa4-9a8f-05ce7c92329e
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.599655   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.599655   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.599655   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.599754   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.600156   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:48.600904   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:48.600904   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:48.600904   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:48.600904   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:48.603669   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:48.603669   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Audit-Id: 92eebe15-88e2-4ab5-90d9-831fedb9feda
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:48.603669   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:48.603669   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:48.603669   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 05:46:48.604659   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:49.089944   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:49.089944   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.089944   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.089944   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.094814   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:49.095040   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Audit-Id: 7afcd4ad-a024-4cae-ae1d-35ac201565d9
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.095040   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.095040   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.095040   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.095204   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:49.096542   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:49.096542   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.096542   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.096542   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.099424   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:49.099424   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.099424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.099424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Audit-Id: 9e37f3a9-2b34-4640-aafd-192c28452379
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.099424   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.099928   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:49.588138   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:49.588138   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.588138   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.588219   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.593035   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:49.593035   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Audit-Id: a9062b19-bb98-47fd-ba54-46ce395c00a4
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.593035   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.593035   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.593035   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.593035   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:49.594257   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:49.594257   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:49.594257   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:49.594329   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:49.597669   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:49.597669   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:49.597669   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Audit-Id: 7a792bcf-eb47-49e9-af10-de3d436655c0
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:49.598147   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:49.598147   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:49.598147   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:49.598247   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.090419   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:50.090481   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.090481   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.090481   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.095354   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:50.095470   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Audit-Id: 45443a5b-ef6b-4809-8f22-caeae74ece9c
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.095470   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.095543   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.095543   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.095543   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.095727   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:50.096702   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:50.096772   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.096772   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.096772   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.100114   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:50.100114   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Audit-Id: c79270e3-d329-4fb2-b2a2-94094173db8c
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.100114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.100114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.100114   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.100114   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.589021   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:50.589021   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.589021   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.589021   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.592617   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:50.593195   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.593195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.593195   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Audit-Id: afba2f2f-c402-4d71-b56a-b80a2f3717f7
	I0603 05:46:50.593195   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.593195   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:50.594187   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:50.594187   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:50.594264   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:50.594264   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:50.596495   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:50.596495   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Audit-Id: 9738cbfc-3e55-46e9-9b7c-4363e23525e6
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:50.596495   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:50.596495   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:50.596495   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:50.597315   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 05:46:50.598173   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:50.598173   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
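The GET / Request Headers / Response Status / Response Headers lines themselves are produced by client-go's debugging round tripper (the round_trippers.go in every prefix), which minikube enables through its high klog verbosity; the truncated Response Body dumps come separately from rest's request.go. A sketch of wiring that wrapper explicitly, assuming the k8s.io/client-go/transport package (the constant names and variadic signature may differ slightly between client-go versions):

import (
	"net/http"

	"k8s.io/client-go/transport"
)

// debugClient returns an *http.Client whose transport traces each request in
// the same shape as the log lines above: URL, request headers, response
// status, and response headers. minikube gets this behavior implicitly via
// klog -v levels rather than by constructing the wrapper itself.
func debugClient() *http.Client {
	rt := transport.NewDebuggingRoundTripper(
		http.DefaultTransport,
		transport.DebugJustURL,
		transport.DebugRequestHeaders,
		transport.DebugResponseStatus,
		transport.DebugResponseHeaders,
	)
	return &http.Client{Transport: rt}
}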
	I0603 05:46:51.088879   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:51.088879   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.088879   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.088879   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.092443   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:51.093343   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Audit-Id: f2b2380a-e67d-4700-b5e6-9172bde419f4
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.093343   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.093403   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.093403   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.093403   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.093403   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:51.094283   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:51.094283   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.094283   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.094283   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.099858   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:51.099858   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.099858   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.099858   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.099858   10844 round_trippers.go:580]     Audit-Id: 6a8112c4-9d14-4a94-b89e-dee65725a642
	I0603 05:46:51.099858   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:51.590439   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:51.590439   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.590439   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.590439   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.595216   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:51.595216   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.595216   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.595216   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.595216   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.595216   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.595571   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.595571   10844 round_trippers.go:580]     Audit-Id: d27e589a-1969-40f4-86aa-57de5ec2d3c4
	I0603 05:46:51.595950   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:51.596728   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:51.596728   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:51.596728   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:51.596728   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:51.600071   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:51.600071   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Audit-Id: 562e3c31-bdc4-4fbc-9263-0afe243cb053
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:51.600071   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:51.600071   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:51.600071   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 05:46:51.600739   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.087802   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:52.087802   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.087872   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.087872   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.093564   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:52.093564   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.093656   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.093656   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Audit-Id: ad249f91-6b0b-447b-873f-c5a9fa7ae951
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.093656   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.093863   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:52.094443   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:52.094443   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.094443   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.094443   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.098158   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:52.098158   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.098158   10844 round_trippers.go:580]     Audit-Id: 3c62881f-f33a-47c8-8c6d-96c853aa132e
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.098230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.098230   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.098230   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.098301   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.600282   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:52.600282   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.600369   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.600369   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.605074   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:52.605074   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.605074   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Audit-Id: ffd49097-2ea6-4cc7-8b1b-65a1c98feede
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.605074   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.605074   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.605074   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:52.606227   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:52.606227   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:52.606227   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:52.606227   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:52.609393   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:52.609393   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Audit-Id: efb0837c-2971-4a51-89a7-44ca1ef1e9ab
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:52.609393   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:52.609393   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:52.609393   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:52.609393   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:52.610134   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:53.101620   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:53.101681   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.101681   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.101681   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.108280   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:46:53.108280   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Audit-Id: 4c81f744-80fb-4a22-8695-9431833c3e42
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.108639   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.108639   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.108639   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.109029   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:53.109791   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:53.109791   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.109920   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.109920   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.132408   10844 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0603 05:46:53.132408   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Audit-Id: 8da6ef5b-785e-423c-8c17-48d19ff52664
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.132482   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.132482   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.132482   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.132814   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:53.587823   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:53.587823   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.587823   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.587823   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.592384   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:53.592445   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.592445   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.592445   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.592445   10844 round_trippers.go:580]     Audit-Id: eb713599-0b71-4e64-b070-1f158e15df3e
	I0603 05:46:53.592803   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:53.593085   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:53.593624   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:53.593624   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:53.593624   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:53.599215   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:53.599215   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Audit-Id: 2cefec6c-1577-4cf3-9c20-e04443c2b9ea
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:53.599215   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:53.599215   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:53.599215   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 05:46:53.599930   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.087345   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:54.087522   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.087522   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.087522   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.091102   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.092086   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.092086   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.092086   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.092086   10844 round_trippers.go:580]     Audit-Id: 4d477120-e7aa-497a-913f-16a24bceb6e3
	I0603 05:46:54.092317   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:54.093171   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:54.093171   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.093171   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.093171   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.095742   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:54.095742   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.096176   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.096176   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.096249   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.096333   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.096540   10844 round_trippers.go:580]     Audit-Id: 170c493e-d4ac-45f6-8933-fd45c55eddfb
	I0603 05:46:54.096592   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.096870   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.601527   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:54.601527   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.601527   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.601527   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.605467   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.606324   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Audit-Id: e591b238-fbd1-4190-bcb2-931e7d4f16b7
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.606324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.606324   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.606324   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.607211   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:54.607933   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:54.607933   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:54.607933   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:54.607933   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:54.611522   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:54.611766   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Audit-Id: cd5fc7f8-c2a7-44af-bee1-1af246633fb9
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:54.611766   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:54.611766   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:54.611850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:54.612511   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:54.613367   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:46:55.099434   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:55.099434   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.099434   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.099434   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.103807   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:55.103807   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.104730   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.104730   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.104730   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.104730   10844 round_trippers.go:580]     Audit-Id: 6e5eebcd-6723-4f5b-b30e-d9fc65dbd2c4
	I0603 05:46:55.104830   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.104830   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.105035   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:55.105892   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:55.105892   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.105892   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.105892   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.110903   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:55.110903   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Audit-Id: 4964b26b-1723-4218-b263-8d2bbc28f2ab
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.110903   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.110903   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.110903   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.111448   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:55.587077   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:55.587077   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.587187   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.587187   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.591314   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:55.591697   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.591697   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.591697   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.591697   10844 round_trippers.go:580]     Audit-Id: af6f64c4-9f55-4c16-a696-d2510ee5e6b1
	I0603 05:46:55.592139   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:55.592843   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:55.592922   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:55.592922   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:55.592922   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:55.598914   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:46:55.598914   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:55.598914   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:55 GMT
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Audit-Id: 30c925a1-0569-4d0f-a251-21408d1536a2
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:55.598914   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:55.598914   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:55.598914   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:56.101735   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:56.101735   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.101735   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.101735   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.106313   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:56.106313   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.106313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.106313   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.106313   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.106313   10844 round_trippers.go:580]     Audit-Id: 1eddcf0d-da5a-4445-b23e-650fbfc15ee1
	I0603 05:46:56.106429   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.106429   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.106604   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:56.107414   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:56.107485   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.107485   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.107485   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.109773   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.109773   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.109773   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.110549   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.110549   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Audit-Id: 3e21a8f0-6f10-4585-ab57-330ad2b8d7b2
	I0603 05:46:56.110549   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.110753   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:56.599817   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:56.599817   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.599817   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.599817   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.602405   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.603246   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.603246   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.603246   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.603246   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Audit-Id: d1a656fd-9164-44a5-9ceb-ad1cff9de083
	I0603 05:46:56.603328   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.603575   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:56.604097   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:56.604097   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:56.604097   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:56.604097   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:56.606670   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:56.606670   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Audit-Id: 540db8f7-b5ab-4875-885a-fe44442f05dd
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:56.606670   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:56.606670   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:56.606670   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:56 GMT
	I0603 05:46:56.607757   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:57.096624   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:57.096813   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.096813   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.096813   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.100612   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:57.101149   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.101149   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Audit-Id: fa35b893-902b-4b1b-81b9-30e9943ac660
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.101149   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.101149   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.101407   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:57.101744   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:57.102273   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.102273   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.102273   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.106922   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.106922   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.106922   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.106922   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Audit-Id: 99f98ac1-298c-4b65-bacc-7bebdff9b954
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.106922   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.107609   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:57.107804   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
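
The repeated GET pairs above are the pod_ready wait loop: roughly every 500 ms it re-fetches the coredns pod and its node from the apiserver, and logs "Ready":"False" until the pod's PodReady condition turns True or the wait times out. For readers unfamiliar with the pattern, the following is a minimal client-go sketch of such a readiness poll; the kubeconfig path is a hypothetical placeholder, the pod name is taken from this log, and this is an illustration of the general technique, not minikube's actual pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Bound the overall wait, as the test harness does.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		// Re-fetch the pod each iteration, mirroring the GETs in the log.
		pod, err := client.CoreV1().Pods("kube-system").
			Get(ctx, "coredns-7db6d8ff4d-4hrc6", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod readiness")
			return
		case <-time.After(500 * time.Millisecond): // ~2 polls/second, as seen above
		}
	}
}

In production code a watch or informer would avoid this per-iteration polling, but a bounded poll like the one sketched here is simpler and matches the request cadence visible in the log.
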
	I0603 05:46:57.597452   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:57.597452   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.597452   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.597452   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.602065   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.602065   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Audit-Id: 72c1de68-358d-4304-973b-863283f8f124
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.602065   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.602065   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.602498   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.602498   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.602994   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:57.603624   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:57.603624   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:57.603624   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:57.603624   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:57.607998   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:57.607998   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Audit-Id: 29143fcd-0c3a-40ab-b72c-95381e387c84
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:57.607998   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:57.607998   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:57.607998   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:57 GMT
	I0603 05:46:57.608873   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:58.098684   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:58.098684   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.098796   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.098796   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.102942   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:46:58.102942   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.102942   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Audit-Id: 62725f5d-d69b-4115-b212-37447c8a8e8a
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.102942   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.102942   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.102942   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:58.104188   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:58.104246   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.104246   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.104246   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.107675   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.107675   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Audit-Id: f3ee7798-ddec-4f3d-8965-60e65f0954cf
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.107743   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.107743   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.107743   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.108306   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:58.599203   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:58.599203   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.599203   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.599203   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.602801   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.603552   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.603552   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.603552   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.603552   10844 round_trippers.go:580]     Audit-Id: 98794095-a4eb-488f-a093-059538800e84
	I0603 05:46:58.603820   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:58.604626   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:58.604698   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:58.604698   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:58.604698   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:58.608080   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:58.608080   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:58 GMT
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Audit-Id: 621c6cad-b949-4304-908f-c983b9c26292
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:58.608080   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:58.608080   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:58.608080   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:58.609250   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:59.099044   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:59.099044   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.099044   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.099044   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.102669   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.103392   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.103392   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.103392   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.103392   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.103392   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.103523   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.103523   10844 round_trippers.go:580]     Audit-Id: 44ee76ac-1b9b-4b69-bcca-065b6c082cac
	I0603 05:46:59.103699   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:59.104592   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:59.104695   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.104695   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.104695   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.108015   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.108097   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.108097   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Audit-Id: d421c825-8a83-4c80-b61b-02756b227db3
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.108097   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.108097   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.108309   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:46:59.108840   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
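The loop traced above is the standard client-go readiness poll: GET the pod, inspect its Ready condition, wait about half a second, and repeat until the condition flips to True or the timeout expires (the pod_ready.go:102 lines report each unsuccessful check). Below is a minimal Go sketch of that pattern, assuming a reachable kubeconfig; the helper name waitPodReady and the overall structure are illustrative only, not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod every 500ms (matching the cadence seen in the
// trace) until its Ready condition is True or the timeout elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// Mirrors the trace: `has status "Ready":"False"` until it flips.
					fmt.Printf("pod %q has status Ready=%s\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet; keep polling
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"coredns-7db6d8ff4d-4hrc6", 6*time.Minute); err != nil {
		panic(err)
	}
}

Each iteration of such a loop corresponds to one GET/Response pair in the trace; the GET of the node object interleaved after every pod check is the same call made against cs.CoreV1().Nodes().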
	I0603 05:46:59.598506   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:46:59.598506   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.598506   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.598506   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.602203   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:46:59.602203   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.602203   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.603194   10844 round_trippers.go:580]     Audit-Id: b95d7f54-ed6f-4a2f-a7ab-4dda251bba59
	I0603 05:46:59.603194   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.603221   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.603221   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.603221   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.603221   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:46:59.604357   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:46:59.604357   10844 round_trippers.go:469] Request Headers:
	I0603 05:46:59.604412   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:46:59.604412   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:46:59.606804   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:46:59.606804   10844 round_trippers.go:577] Response Headers:
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Audit-Id: 427c2bf3-a0bc-47ca-88a9-6cfe21e8d39d
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:46:59.607417   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:46:59.607417   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:46:59.607417   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:59 GMT
	I0603 05:46:59.607844   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:00.099100   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:00.099100   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.099213   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.099213   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.102619   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:00.103465   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.103465   10844 round_trippers.go:580]     Audit-Id: 0aad60ad-7839-4aa8-9d75-04d7bf98312e
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.103526   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.103526   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.103526   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.103778   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:00.104382   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:00.104382   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.104382   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.104382   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.106968   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:00.106968   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Audit-Id: 228407a3-9ca4-4994-8f3c-b392b9e4da13
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.106968   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.106968   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.106968   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.107439   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:00.600209   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:00.600371   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.600371   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.600451   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.604158   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:00.604917   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.604917   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.604917   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.604917   10844 round_trippers.go:580]     Audit-Id: 06c5f899-bddb-485c-beab-8da0a71f44f6
	I0603 05:47:00.605162   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:00.606474   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:00.606474   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:00.606474   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:00.606474   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:00.609106   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:00.609106   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:00.609106   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:00.609106   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:00.609106   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:00.609106   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 05:47:00.609964   10844 round_trippers.go:580]     Audit-Id: a2b91db9-b0d3-4352-87be-9bf0280a67f3
	I0603 05:47:00.609964   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:00.610936   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:01.100125   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:01.100270   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.100270   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.100270   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.104057   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:01.104057   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.104057   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.104057   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.104661   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Audit-Id: 6a316f66-9036-48b1-8557-9c19c33f22fb
	I0603 05:47:01.104661   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.104970   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:01.105826   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:01.105826   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.105826   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.105826   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.112171   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:01.112171   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Audit-Id: e212d988-fdbb-470f-9ec8-64d75e89b25b
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.112171   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.112171   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.112171   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.112171   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:01.112943   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
	I0603 05:47:01.600025   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:01.600025   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.600025   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.600025   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.604850   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:01.605410   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.605410   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.605410   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.605410   10844 round_trippers.go:580]     Audit-Id: f31112ce-8e1a-4169-8d15-bfcf31e0fc72
	I0603 05:47:01.605674   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:01.605830   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:01.605830   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:01.605830   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:01.605830   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:01.611853   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:01.611897   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Audit-Id: d04f0410-9684-4563-9c35-648067c75858
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:01.611897   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:01.611897   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:01.611897   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:01 GMT
	I0603 05:47:01.612633   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:02.097580   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:02.097702   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.097702   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.097702   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.102187   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:02.102187   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.102187   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.102187   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Audit-Id: 78f0850a-8e27-47e8-be59-58df6cc90b09
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.102521   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.102741   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:02.103579   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:02.103596   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.103596   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.103596   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.106578   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:02.106694   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.106694   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.106694   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Audit-Id: ba15f9b3-3415-4a1d-b975-59100a12178a
	I0603 05:47:02.106694   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.107029   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:02.597164   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:02.597164   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.597164   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.597164   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.602382   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:02.602382   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.602382   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.602382   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Audit-Id: b9ff1126-9189-4e4c-aa9f-2ef453ed71ba
	I0603 05:47:02.602382   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.602382   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:02.603102   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:02.603102   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:02.603102   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:02.603102   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:02.606978   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:02.607114   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Audit-Id: a0fdd326-e683-4a52-8b1a-91948eb6e25d
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:02.607114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:02.607114   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:02.607114   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:02 GMT
	I0603 05:47:02.607557   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.097400   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:03.097400   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.097494   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.097494   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.103819   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:03.103902   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.103929   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.103929   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.103929   10844 round_trippers.go:580]     Audit-Id: 47039cf0-45f6-4c6f-bee3-0f0890a4fb11
	I0603 05:47:03.103962   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.104038   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:03.104897   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:03.104897   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.104897   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.104897   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.108128   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:03.108128   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Audit-Id: 0cfdc209-2a80-497f-8551-86538ed0a330
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.108128   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.108128   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.108128   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.108128   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.596353   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:03.596353   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.596353   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.596353   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.601075   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:03.601292   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Audit-Id: c95e241b-37bc-4fc7-b34c-62ffad918fa1
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.601292   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.601292   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.601391   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.601391   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.601622   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:03.602431   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:03.602431   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:03.602431   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:03.602431   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:03.606000   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:03.606000   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:03.606000   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:03.606000   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:03.606000   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:03 GMT
	I0603 05:47:03.606338   10844 round_trippers.go:580]     Audit-Id: 4d11eb09-22be-461d-9b15-50f217bf7945
	I0603 05:47:03.606661   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:03.607236   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
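The per-request "GET …", "Request Headers:", "Response Status: … in N milliseconds", and "Response Headers:" lines in this trace are produced by client-go's debugging round tripper (transport/round_trippers.go), which wraps the HTTP transport when klog verbosity is high enough. As an illustration only, not client-go's actual code, the same idea can be reproduced in plain net/http like this:

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper wraps another RoundTripper and logs each request and
// response in roughly the format seen in the trace above.
type loggingRoundTripper struct{ next http.RoundTripper }

func (rt loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL) // cf. round_trippers.go:463
	log.Printf("Request Headers:")           // cf. round_trippers.go:469/:473
	for k, vs := range req.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := rt.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	// cf. round_trippers.go:574 and :577/:580
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vs := range resp.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	if _, err := client.Get("https://example.com/"); err != nil {
		log.Fatal(err)
	}
}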
	I0603 05:47:04.096662   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:04.096766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.096766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.096766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.101198   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:04.101198   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Audit-Id: 157d21e3-5922-4f4b-bcf5-86d614ae3629
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.101198   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.101198   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.101198   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.101500   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:04.102250   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:04.102250   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.102250   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.102322   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.104160   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:47:04.104160   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.105157   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Audit-Id: 33cb0ac4-bc1d-4086-ae7c-d8202de61269
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.105178   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.105178   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.105327   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:04.599511   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:04.599642   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.599642   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.599642   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.604451   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:04.604451   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.604451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Audit-Id: d89f6690-271a-4ac5-8712-1ae5c1866e66
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.604451   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.604451   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.604451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:04.605615   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:04.605615   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:04.605678   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:04.605678   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:04.608044   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:04.608992   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Audit-Id: 70fbb42b-4171-482e-ad67-67bea4a635ec
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:04.608992   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:04.609043   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:04.609043   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:04.609043   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:04 GMT
	I0603 05:47:04.609317   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:05.088714   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:05.088714   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.088714   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.088714   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.093322   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:05.093322   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.093322   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.093322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.093499   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.093499   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.093499   10844 round_trippers.go:580]     Audit-Id: cde01b6d-0720-4209-aef4-38850b17c982
	I0603 05:47:05.093529   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.094317   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:05.095109   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:05.095180   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.095180   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.095180   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.098213   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:05.098213   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.098213   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.098213   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Audit-Id: 2954bc53-337a-441e-baec-25fcc96db60d
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.098213   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.098572   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:05.598196   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:05.598273   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.598273   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.598399   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.601694   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:05.602194   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.602194   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.602194   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Audit-Id: bc2cfc8e-465a-4e60-a34c-33bba9966948
	I0603 05:47:05.602194   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.602451   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:05.603232   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:05.603232   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:05.603232   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:05.603346   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:05.605740   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:05.605740   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:05.605740   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:05.606037   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:05 GMT
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Audit-Id: a771c840-d791-41e5-8aef-3d3555e3bab2
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:05.606037   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:05.606331   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:06.091494   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:06.091587   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.091587   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.091634   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.095559   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:06.095635   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.095635   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.095635   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.095635   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Audit-Id: 1789ef2c-8f11-4086-b0a3-c03447bdbad5
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.095700   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.095700   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:06.097220   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:06.097220   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.097220   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.097220   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.100155   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:06.100317   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.100317   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.100317   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.100317   10844 round_trippers.go:580]     Audit-Id: 9370556c-d96e-493a-974b-51d6304bd102
	I0603 05:47:06.100400   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.100813   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:06.101346   10844 pod_ready.go:102] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"False"
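
Each iteration above follows the same shape: GET the pod, inspect its Ready condition, GET the hosting node, log "Ready":"False", sleep roughly half a second, and try again until the 6m0s budget runs out. A minimal client-go sketch of that pattern follows; this is not minikube's actual pod_ready.go, and the helper name, poll interval, and timeout handling are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server until the pod's Ready condition is True
// or the timeout elapses, mirroring the repeated GETs in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // ~2 polls per second, matching the log cadence
	}
	return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-4hrc6", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
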
	I0603 05:47:06.599417   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:06.599417   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.599417   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.599417   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.603964   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:06.604447   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Audit-Id: 5eeafd59-3dd5-46bc-a4d0-2c92bb30dda2
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.604447   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.604447   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.604517   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.604517   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.605434   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1739","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0603 05:47:06.606418   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:06.606418   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:06.606418   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:06.606418   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:06.609663   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:06.609719   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:06.609719   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:06.609719   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:06 GMT
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Audit-Id: 2b554633-7673-49f5-a72d-ddb67aed1c31
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:06.609719   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:06.609719   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.102159   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:47:07.102159   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.102486   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.102486   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.108141   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:07.108245   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.108245   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.108245   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.108245   10844 round_trippers.go:580]     Audit-Id: b45243ce-e442-4a87-91c3-27b98cedf22d
	I0603 05:47:07.108535   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0603 05:47:07.109278   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.109350   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.109350   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.109350   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.113677   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:07.113970   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.113970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.113970   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Audit-Id: f3764f23-4356-448a-809e-46d35400c2cd
	I0603 05:47:07.113970   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.114279   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.114807   10844 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.114807   10844 pod_ready.go:81] duration metric: took 25.528442s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.114807   10844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.114898   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:47:07.114976   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.114976   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.114976   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.120765   10844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 05:47:07.120765   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.120765   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Audit-Id: 3fe523be-d456-4a71-8e04-aa0a7a390cb7
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.120765   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.120765   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.121397   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1822","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0603 05:47:07.122030   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.122138   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.122168   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.122168   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.124801   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.124801   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.124801   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.124801   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Audit-Id: 26ba75ac-3bf0-47a0-8973-5b6d7b97958f
	I0603 05:47:07.124801   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.125478   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.125872   10844 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.125930   10844 pod_ready.go:81] duration metric: took 11.1227ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.125982   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.126105   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:47:07.126136   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.126136   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.126136   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.129386   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.129473   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.129473   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.129473   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Audit-Id: e94fc1be-cee3-47c8-a784-dfe73aed0dea
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.129473   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.129473   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1830","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0603 05:47:07.130310   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.130381   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.130381   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.130381   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.132679   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.132679   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.132679   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.133083   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.133083   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Audit-Id: 8619a4b7-5646-4c6e-9273-ebcaabb3d40e
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.133137   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.133137   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.133137   10844 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.133137   10844 pod_ready.go:81] duration metric: took 7.1551ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.133721   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.133766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:47:07.133766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.133877   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.133877   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.140103   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:07.140103   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.140103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.140103   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Audit-Id: 159d5dde-1723-42d0-afff-9039ea610a9e
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.140103   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.140640   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1827","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0603 05:47:07.140843   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.140843   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.140843   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.140843   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.142979   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.142979   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Audit-Id: a86b9720-4652-462c-b6ed-be6ab14218ff
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.142979   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.142979   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.142979   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.143942   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.143942   10844 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.143942   10844 pod_ready.go:81] duration metric: took 10.2215ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.143942   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.143942   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:47:07.143942   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.143942   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.143942   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.147003   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.147003   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Audit-Id: 86000150-4726-4e8e-890d-d83b7449c0e3
	I0603 05:47:07.147003   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.148042   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.148042   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.148042   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.148335   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0603 05:47:07.148413   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:47:07.148413   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.148999   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.148999   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.151431   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:47:07.151431   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Audit-Id: 52d42757-7111-4838-908c-dfd00087f27c
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.151431   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.151431   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.151431   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.151431   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1870","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:47:07.151431   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:47:07.151431   10844 pod_ready.go:81] duration metric: took 7.4891ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:47:07.151431   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
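
The wait for kube-proxy-dl97g is skipped rather than failed: the pod lives on multinode-316400-m03, whose Ready condition reports "Unknown", so the pod cannot be expected to become Ready. A short sketch of that node-side check follows (a hedged illustration, not minikube's exact helper; it reuses the kubeconfig-loading pattern from the earlier sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True; False or
// Unknown (as for multinode-316400-m03 here) means pods scheduled on it
// should not be waited on.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ready, err := nodeReady(context.Background(), cs, "multinode-316400-m03")
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", ready)
}
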
	I0603 05:47:07.151431   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.304519   10844 request.go:629] Waited for 152.865ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:47:07.304766   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:47:07.304766   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.304766   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.304766   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.311533   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:07.311533   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.311533   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Audit-Id: bd96bae8-2fe9-4fb9-b5a4-cde2f9b34461
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.311533   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.311533   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.311533   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
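
The "Waited for ... due to client-side throttling, not priority and fairness" messages here are produced by client-go's own token-bucket rate limiter (defaults: QPS 5, Burst 10), not by the API server's priority-and-fairness machinery; once the burst is spent, requests queue locally before being sent. A sketch of raising those limits on a rest.Config follows (the values are illustrative assumptions, not minikube's settings):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go delays requests client-side once QPS/Burst are exceeded,
	// which is exactly what the "Waited for ..." log lines report.
	config.QPS = 50
	config.Burst = 100
	cs := kubernetes.NewForConfigOrDie(config)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
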
	I0603 05:47:07.507286   10844 request.go:629] Waited for 194.4376ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.507375   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:07.507375   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.507375   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.507375   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.511274   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.511274   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.511274   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.511274   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.511934   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.511934   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.511934   10844 round_trippers.go:580]     Audit-Id: a10ca4f9-3fb3-40b8-9ca5-ddcd20ac08e7
	I0603 05:47:07.511984   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.512249   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:07.512622   10844 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:07.512622   10844 pod_ready.go:81] duration metric: took 361.1893ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.512622   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:07.710123   10844 request.go:629] Waited for 197.2536ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:47:07.710199   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:47:07.710199   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.710199   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.710199   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.713992   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.713992   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.713992   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.713992   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.713992   10844 round_trippers.go:580]     Audit-Id: 7711a4e9-cb4d-47b3-a381-a33dbc407eb2
	I0603 05:47:07.714916   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.715186   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"1913","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0603 05:47:07.912958   10844 request.go:629] Waited for 196.7258ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:47:07.913242   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:47:07.913242   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:07.913306   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:07.913306   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:07.916688   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:07.916688   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:07.916688   10844 round_trippers.go:580]     Audit-Id: d305dd8b-b2e2-4410-b6b8-847a151efc81
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:07.917072   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:07.917072   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:07.917072   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:07 GMT
	I0603 05:47:07.918033   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136","resourceVersion":"1918","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_26_18_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4582 chars]
	I0603 05:47:07.918033   10844 pod_ready.go:97] node "multinode-316400-m02" hosting pod "kube-proxy-z26hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m02" has status "Ready":"Unknown"
	I0603 05:47:07.918033   10844 pod_ready.go:81] duration metric: took 405.4099ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	E0603 05:47:07.918033   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m02" hosting pod "kube-proxy-z26hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m02" has status "Ready":"Unknown"
	I0603 05:47:07.918652   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:08.115342   10844 request.go:629] Waited for 196.4696ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:47:08.115342   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:47:08.115342   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:08.115342   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:08.115342   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:08.119192   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:08.119192   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:08.119192   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:08 GMT
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Audit-Id: 2f941bfa-9707-40b0-8241-6cb30bab08f1
	I0603 05:47:08.119192   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:08.119729   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:08.119729   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:08.119729   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1854","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0603 05:47:08.303029   10844 request.go:629] Waited for 182.153ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:08.303135   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:47:08.303355   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:08.303355   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:08.303355   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:08.308062   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:47:08.308062   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:08.308062   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:08.308062   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:08.308062   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:08 GMT
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Audit-Id: ba997dd1-1d76-4bbc-af0c-e5f7b50b67d2
	I0603 05:47:08.308162   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:08.308758   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:47:08.309566   10844 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:47:08.309566   10844 pod_ready.go:81] duration metric: took 390.9119ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:47:08.309566   10844 pod_ready.go:38] duration metric: took 26.7377403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:47:08.309566   10844 api_server.go:52] waiting for apiserver process to appear ...
	I0603 05:47:08.319426   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:08.343243   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:08.343658   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:08.352813   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:08.377442   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:08.377442   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:08.387382   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:08.415325   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:08.415432   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:08.415456   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:08.424932   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:08.448074   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:08.448926   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:08.448926   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:08.459567   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:08.484922   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:08.485166   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:08.485166   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:08.494224   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:08.524572   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:08.524572   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:08.524572   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:08.534541   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:08.562029   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:08.562029   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:08.563010   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
	I0603 05:47:08.563010   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:08.563010   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:08.596027   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:08.596204   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:08.596266   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596266   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:08.596370   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.596370   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:08.596449   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.596505   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596578   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.596601   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596669   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596693   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.596693   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.596766   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596828   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.596852   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.596922   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.596922   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597021   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597076   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597076   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597169   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597247   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597247   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597301   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597379   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597379   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597444   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.597473   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.597561   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:08.597588   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598116   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598175   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:08.598305   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:08.598305   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 05:47:08.609144   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:08.609144   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:08.638188   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:08.638248   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.638248   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:08.638810   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.638929   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.639065   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.639065   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.642263   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:08.642263   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.672942   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:08.673832   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.674871   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:08.675869   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:08.676872   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.677870   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.678868   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.679868   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:08.680830   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:08.681875   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
	I0603 05:47:08.728776   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:08.728776   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:08.765977   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766525   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:08.766525   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:08.766656   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:08.766768   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766830   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766896   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:08.766920   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766949   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.766949   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:08.766949   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:08.766987   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:08.766987   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.766987   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:08.767055   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:08.767107   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:08.767107   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:08.767671   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:08.767861   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:08.767861   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:08.767970   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:08.767970   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:08.768032   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:08.768050   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:08.768050   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:08.768050   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:08.768105   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:08.768128   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:08.768170   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:08.768170   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:08.768195   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:08.768195   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
	I0603 05:47:08.776793   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:08.776793   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:08.805375   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:08.805729   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:08.805729   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:08.805832   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:08.805832   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:08.805894   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:08.805921   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:08.806037   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:08.808623   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:08.809607   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
	I0603 05:47:08.820044   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:08.820044   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:08.859036   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:08.859101   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:08.859182   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:08.859300   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:08.859365   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:08.859365   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:08.859389   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:08.859417   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:08.859417   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:08.859455   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:08.859494   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:08.859533   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:08.859533   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:08.859591   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:08.859591   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:08.859615   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:08.859615   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:08.859724   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:08.859741   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:08.859741   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:08.859802   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:08.859854   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:08.860386   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:08.860425   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:08.860552   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:08.860617   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:08.860690   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:08.860746   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:08.860746   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:08.861152   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:08.861215   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:08.861296   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:08.861296   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:08.861363   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:08.861452   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:08.861506   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:08.861525   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:08.861525   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:08.861581   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:08.861641   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:08.861667   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:08.861819   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:08.861902   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:08.861961   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:08.861961   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:08.861961   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:08.862042   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:08.862097   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:08.862157   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:08.862184   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:08.862184   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:08.862236   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:08.862317   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:08.862377   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:08.862377   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862439   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862439   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862510   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:08.862575   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:08.862597   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:08.862632   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:08.862632   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:08.862692   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:08.862749   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:08.862749   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:08.862773   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:08.862801   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:08.862801   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:08.862837   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:08.862876   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:08.862876   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:08.862922   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:08.862922   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:08.862981   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:08.863005   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:08.863051   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:08.863078   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:08.863138   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:08.863138   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:08.863186   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:08.863186   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:08.863214   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:08.863214   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:08.863410   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:08.863438   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:08.863438   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:08.863485   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:08.863552   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:08.863552   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:08.863613   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:08.863613   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:08.863643   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:08.863673   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:08.864209   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:08.864249   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:08.864837   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:08.864927   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:08.864927   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:08.872210   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
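The controller-manager tail above is dominated by shared-informer startup: each controller logs "Waiting for caches to sync" when it starts and "Caches are synced" once its informers complete their initial LIST, and only then starts its workers. A minimal client-go sketch of that pattern (illustrative only; the kubeconfig path and the 30s resync period are assumptions, not values from this run):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption for the sketch: default kubeconfig location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory; the 30s resync is an arbitrary example value.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This is the step behind the "shared_informer.go ... Waiting for caches
	// to sync" lines above; it returns true once the initial LIST+WATCH has
	// populated the local cache ("Caches are synced").
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced; safe to start controller workers")
}

In a healthy start-up, as in the tail above, every "Waiting for caches to sync for X" line is eventually paired with a "Caches are synced for X" line; a controller whose pair never appears is the one to investigate.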
	I0603 05:47:08.890210   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:08.890210   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
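For context on the kindnet output that follows: kindnet wakes roughly every 10 seconds, lists all nodes, skips route programming for the node it runs on, and ensures a route to every other node's PodCIDR via that node's IP, which produces the repeating "Handling node with IPs" / "has CIDR" triplets below. A simplified, hypothetical sketch of that loop (not kindnet's actual source; the node names, IPs, and CIDRs are taken from the log lines below):

package main

import (
	"fmt"
	"time"
)

// node stands in for the fields kindnet reads off v1.Node objects.
type node struct {
	name    string
	ip      string
	podCIDR string
}

func reconcile(current string, nodes []node) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.name == current {
			fmt.Println("handling current node") // no route needed to self
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
		// The real agent would now ensure: ip route add <podCIDR> via <ip>
	}
}

func main() {
	nodes := []node{
		{"multinode-316400", "172.17.87.47", "10.244.0.0/24"},
		{"multinode-316400-m02", "172.17.94.201", "10.244.1.0/24"},
		{"multinode-316400-m03", "172.17.93.131", "10.244.2.0/24"},
	}
	// Matches the ~10s cadence visible in the timestamps below.
	for range time.Tick(10 * time.Second) {
		reconcile("multinode-316400", nodes)
	}
}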
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.932228   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.934225   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.935220   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.936221   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.937221   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:08.938213   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:08.939215   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
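The kindnet lines above are one reconcile pass repeated roughly every 10s: list all nodes, log "handling current node" for the one the daemon runs on, and make sure each peer's pod CIDR is routed via that peer's node IP. The notable event is at 12:41:29, where multinode-316400-m03 reappears with a new node IP (172.17.87.60) and a new CIDR (10.244.3.0/24), so routes.go:62 installs a fresh route; the struct it prints is a netlink Route. Below is a minimal Go sketch of that pattern, assuming github.com/vishvananda/netlink (Linux-only, needs root); the node type and listNodes stub are illustrative stand-ins, not kindnet's actual code:

    package main

    import (
    	"log"
    	"net"
    	"time"

    	"github.com/vishvananda/netlink"
    )

    // node is an illustrative stand-in for what a kindnet-style daemon
    // would read from the API server via an informer.
    type node struct {
    	name    string
    	nodeIP  net.IP     // e.g. 172.17.87.60
    	podCIDR *net.IPNet // e.g. 10.244.3.0/24
    	current bool       // true for the node this daemon runs on
    }

    // listNodes is a static stub; real code would watch Node objects.
    func listNodes() []node {
    	_, c1, _ := net.ParseCIDR("10.244.1.0/24")
    	_, c3, _ := net.ParseCIDR("10.244.3.0/24")
    	return []node{
    		{name: "multinode-316400", nodeIP: net.ParseIP("172.17.87.47"), current: true},
    		{name: "multinode-316400-m02", nodeIP: net.ParseIP("172.17.94.201"), podCIDR: c1},
    		{name: "multinode-316400-m03", nodeIP: net.ParseIP("172.17.87.60"), podCIDR: c3},
    	}
    }

    // reconcile ensures one route per peer node: pod CIDR -> via node IP.
    func reconcile(nodes []node) {
    	for _, n := range nodes {
    		log.Printf("Handling node with IPs: map[%s:{}]", n.nodeIP)
    		if n.current {
    			// The local CIDR is served by the node's own bridge, not a route.
    			log.Println("handling current node")
    			continue
    		}
    		log.Printf("Node %s has CIDR [%s]", n.name, n.podCIDR)
    		r := &netlink.Route{Dst: n.podCIDR, Gw: n.nodeIP}
    		if err := netlink.RouteReplace(r); err != nil {
    			log.Printf("adding route %v failed: %v", r, err)
    		}
    	}
    }

    func main() {
    	for range time.Tick(10 * time.Second) {
    		reconcile(listNodes())
    	}
    }

RouteReplace makes the loop idempotent: with unchanged node data each tick is a no-op, while an IP or CIDR change, like m03's re-join above, converges on the next pass.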
	I0603 05:47:08.956211   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:08.956211   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:08.982439   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:08.982439   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:08.982604   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:08.982604   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:08.982668   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:08.982668   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
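The dmesg block above comes from the exact shell pipeline logged at ssh_runner.go:195: human-readable output (-H) with the pager and color disabled (-P, -L=never), filtered to warn and worse, capped at 400 lines. A local Go stand-in for that command, purely illustrative (minikube actually runs it inside the guest over SSH):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same filter as the report: only warn/err/crit/alert/emerg survive,
    	// so the post-mortem stays signal-heavy.
    	cmd := exec.Command("/bin/bash", "-c",
    		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }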
	I0603 05:47:08.984668   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:08.984668   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:09.011241   10844 command_runner.go:130] > .:53
	I0603 05:47:09.011241   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:09.011241   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:09.011241   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:09.011241   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
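The lone HINFO query CoreDNS logs here, a name built from two random numeric labels and answered NXDOMAIN, matches the startup probe CoreDNS's loop plugin sends to itself to detect forwarding loops; NXDOMAIN is the healthy outcome. A sketch of an equivalent probe using github.com/miekg/dns, purely illustrative (the 127.0.0.1:53 target is an assumption you would point at the resolver under test):

    package main

    import (
    	"fmt"
    	"math/rand"

    	"github.com/miekg/dns"
    )

    func main() {
    	// A random, never-registered FQDN like the one in the log line above.
    	// Anything other than NXDOMAIN coming back would suggest a loop.
    	name := fmt.Sprintf("%d.%d.", rand.Uint64(), rand.Uint64())
    	m := new(dns.Msg)
    	m.SetQuestion(name, dns.TypeHINFO)
    	c := new(dns.Client)
    	resp, _, err := c.Exchange(m, "127.0.0.1:53")
    	if err != nil {
    		fmt.Println("exchange failed:", err)
    		return
    	}
    	fmt.Println("rcode:", dns.RcodeToString[resp.Rcode]) // expect NXDOMAIN
    }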
	I0603 05:47:09.011241   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:09.011241   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:09.039265   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:09.039970   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:09.040068   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:09.040068   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:09.040126   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
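The kube-proxy startup above is the standard client-go informer handshake: start each config controller, log "Waiting for caches to sync", and begin proxying only once "Caches are synced" for the service, endpoint-slice, and node configs. A minimal sketch of that handshake against a single Service informer, assuming a reachable kubeconfig at the default home path; illustrative, not kube-proxy's actual wiring:

    package main

    import (
    	"context"
    	"log"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	factory := informers.NewSharedInformerFactory(client, 0)
    	svcInformer := factory.Core().V1().Services().Informer()

    	ctx, cancel := context.WithCancel(context.Background())
    	defer cancel()
    	factory.Start(ctx.Done())

    	// Block until the local cache mirrors the API server, exactly the
    	// wait the kube-proxy lines above log before serving traffic.
    	log.Println("Waiting for caches to sync for service config")
    	if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced) {
    		log.Fatal("cache never synced")
    	}
    	log.Println("Caches are synced for service config")
    }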
	I0603 05:47:09.042230   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:09.042230   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:09.252742   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:09.252742   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:09.252742   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.252742   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.252742   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:09.252742   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:09.252742   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.253724   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:09.253724   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.253724   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:02 +0000
	I0603 05:47:09.253724   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:09.253724   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:09.253724   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:09.253724   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:09.253724   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:09.253724   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.253724   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.253724   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.253724   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:09.253724   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.253724   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.253724   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:09.253724   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:09.253724   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.253724   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.253724   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:09.253724   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:09.253724   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.253724   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:09.253724   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:09.253724   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:09.253724   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:09.254698   10844 command_runner.go:130] > Events:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:09.254698   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:09.254698   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.254698   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.254698   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:09.254698   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:09.254698   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:09.254698   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.254698   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.254698   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:09.254698   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:09.254698   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.254698   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:09.254698   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.254698   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.254698   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.254698   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.254698   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.254698   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:09.254698   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.254698   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.254698   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:09.254698   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:09.254698   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:09.254698   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.254698   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.254698   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.255689   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0603 05:47:09.255689   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:09.255689   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:09.255689   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:09.255689   10844 command_runner.go:130] > Events:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:09.255689   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:09.255689   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:09.255689   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:09.255689   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:09.255689   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:09.255689   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:09.255689   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:09.255689   10844 command_runner.go:130] > Lease:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:09.255689   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:09.255689   10844 command_runner.go:130] > Conditions:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:09.255689   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:09.255689   10844 command_runner.go:130] > Addresses:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:09.255689   10844 command_runner.go:130] > Capacity:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.255689   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:09.255689   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:09.255689   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:09.255689   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:09.255689   10844 command_runner.go:130] > System Info:
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:09.255689   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:09.255689   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:09.255689   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:09.256698   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:09.256698   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:09.256698   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:09.256698   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:09.256698   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:09.256698   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:09.256698   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:09.256698   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:09.256698   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:09.256698   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:09.256698   10844 command_runner.go:130] > Events:
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:09.256698   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 5m38s                  kube-proxy       
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeReady                5m33s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  NodeNotReady             3m56s                  node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:09.256698   10844 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
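
The three node descriptions above are the post-mortem equivalent of `kubectl describe nodes`: multinode-316400 is Ready again after its restart, while multinode-316400-m02 and -m03 still carry node.kubernetes.io/unreachable taints and report every condition as Unknown because their kubelets stopped heartbeating. Below is a minimal client-go sketch of that same health check — a standalone example, not part of the minikube test suite, assuming the default kubeconfig path:

// nodehealth.go — list every node and flag any whose Ready condition is not
// "True", e.g. the Unknown state reported for multinode-316400-m02/-m03 above.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at the default location minikube writes to.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				// Matches the output above: Ready=Unknown with reason
				// NodeStatusUnknown once the kubelet stops posting status.
				fmt.Printf("%s: Ready=%s (%s: %s)\n", n.Name, c.Status, c.Reason, c.Message)
			}
		}
	}
}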
	I0603 05:47:09.267716   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:09.267716   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:09.301978   10844 command_runner.go:130] > .:53
	I0603 05:47:09.301978   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:09.301978   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:09.302102   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:09.302102   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:09.302184   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:09.302262   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:09.302357   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:09.302408   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:09.302427   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:09.302427   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:09.302493   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:09.302564   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:09.302564   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:09.302655   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:09.302689   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:09.302689   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:09.302744   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:09.302793   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
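
Each CoreDNS query line above carries, in order: client address and port, query id, the question ("TYPE CLASS name. proto size do bufsize"), the response code, response flags, response size, and duration. The short Go sketch below pulls those fields out of one such line; the regex is inferred from the sample output above, not a documented CoreDNS contract:

// corednslog.go — parse a CoreDNS query-log line like the ones gathered above.
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Groups: 1 client IP, 2 port, 3 id, 4 qtype, 5 class, 6 name, 7 proto,
// 8 size, 9 do, 10 bufsize, 11 rcode, 12 flags, 13 rsize, 14 duration.
var queryLine = regexp.MustCompile(
	`\[INFO\] ([\d.]+):(\d+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S*) (\d+) (\S+)`)

func main() {
	line := `[INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s`
	m := queryLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	d, _ := time.ParseDuration(m[14])
	fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%v\n", m[1], m[4], m[6], m[11], d)
}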
	I0603 05:47:09.306115   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:09.306115   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:09.341369   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:09.342356   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:09.343375   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.344366   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:09.345396   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.346365   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.347377   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.348366   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:09.378571   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:09.378571   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:09.443818   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:09.443818   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:09.443818   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:09.443818   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:09.443818   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:09.443818   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:09.443818   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:09.443818   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:09.443818   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:09.443818   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:09.443818   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:09.443818   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:09.443818   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:09.444348   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
	I0603 05:47:09.446243   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:09.446774   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:09.477006   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:09.477006   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:09.478017   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:09.478089   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:09.478242   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.478370   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:09.478456   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:09.478520   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:09.478520   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:09.478589   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:09.480228   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:09.480787   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:09.512305   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:09.512305   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:09.513102   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
	I0603 05:47:09.515098   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:09.515098   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:09.544218   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:09.545117   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:09.545117   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:09.546105   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:09.547113   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:09.548105   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:09.549116   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.549116   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:09.550100   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
	I0603 05:47:12.092494   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:47:12.122524   10844 command_runner.go:130] > 1862
	I0603 05:47:12.122524   10844 api_server.go:72] duration metric: took 1m6.8766895s to wait for apiserver process to appear ...
	I0603 05:47:12.122524   10844 api_server.go:88] waiting for apiserver healthz status ...
	I0603 05:47:12.132404   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:12.155289   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:12.155539   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:12.165042   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:12.187934   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:12.188577   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:12.198275   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:12.220955   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:12.220955   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:12.222717   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:12.231853   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:12.257031   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:12.257031   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:12.257724   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:12.267515   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:12.291045   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:12.291045   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:12.292036   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:12.301908   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:12.326589   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:12.326589   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:12.326589   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:12.336708   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:12.361999   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:12.361999   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:12.363050   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
	I0603 05:47:12.363087   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:12.363153   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401303   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.401848   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.401883   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.401883   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.401910   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.401971   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.401999   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:12.402061   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:12.402104   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.402166   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:12.402166   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:12.402234   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.402234   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:12.402288   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:12.402821   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:12.402936   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:12.402974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:12.402974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:12.403008   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.403008   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403079   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403150   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403220   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.403320   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403349   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403430   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403511   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403570   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403591   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:12.403591   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403654   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403654   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:12.403716   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:12.403740   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:12.403740   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:12.403818   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:12.403818   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.403872   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:12.403897   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:12.403974   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:12.404027   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:12.404052   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:12.404110   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:12.404198   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:12.404198   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404227   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:12.404763   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:12.404879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:12.404879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:12.404910   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:12.404950   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:12.405028   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:12.405100   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405169   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405257   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405368   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:12.405368   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405395   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405395   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405455   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405521   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405583   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405608   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405608   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405660   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405685   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:12.405685   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405737   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405762   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:12.405762   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:12.405830   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:12.405898   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:12.405987   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:12.406044   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:12.406069   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:12.406097   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.406628   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406707   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406804   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.406845   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.406845   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406950   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.406980   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407169   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407204   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407226   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407261   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407261   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407326   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407357   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407896   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407936   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.407964   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.408547   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408547   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408598   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408598   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408658   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:12.408695   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:12.408717   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:12.408717   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:12.408776   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:12.408776   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408817   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408817   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.408869   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409459   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409509   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409509   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:12.409561   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
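The docker unit journal above ends with a burst of otelhttp "superfluous response.WriteHeader" warnings; these come from dockerd's OpenTelemetry HTTP instrumentation and are generally benign noise rather than a failure signal. The same journal can be pulled straight from the node for comparison; a minimal sketch, assuming the usual minikube node layout where Docker runs as a systemd unit named "docker" and the multinode-316400 profile is still up:

    # Hypothetical manual fetch: tail the Docker daemon journal on the primary node.
    minikube -p multinode-316400 ssh -- "sudo journalctl -u docker --no-pager -n 400"
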
	I0603 05:47:12.442628   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:12.442628   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:12.523948   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:12.523948   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:12.524640   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:12.524692   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:12.524692   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:12.524731   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:12.524793   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:12.524793   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:12.524793   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:12.524793   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:12.524793   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:12.524793   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:12.524793   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
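The container-status table above is produced by the crictl/docker fallback the harness runs over SSH (the ssh_runner invocation just before the table). To reproduce it by hand against the same profile, the equivalent one-liner is roughly:

    # Mirror the harness probe: prefer crictl, fall back to the docker CLI.
    minikube -p multinode-316400 ssh -- 'sudo `which crictl || echo crictl` ps -a || sudo docker ps -a'

Note the table confirms the restart story told by the journal: kube-apiserver and etcd are fresh (ATTEMPT 0, new pod IDs), while coredns, kube-proxy, kindnet, and busybox are on their second attempt with their pre-restart containers left in Exited state.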
	I0603 05:47:12.527919   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:12.528034   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:12.557382   10844 command_runner.go:130] > .:53
	I0603 05:47:12.557382   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:12.557382   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:12.558387   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:12.558387   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
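The CoreDNS banner above ends with the loop plugin's startup self-probe (the random HINFO query); the NXDOMAIN answer means the upstream resolver replied and no forwarding loop was detected. The same log can usually be fetched through the API server instead of docker logs; a sketch assuming the conventional kubeadm label k8s-app=kube-dns on the CoreDNS pods:

    # Fetch the same 400-line window via kubectl rather than over SSH.
    kubectl --context multinode-316400 -n kube-system logs -l k8s-app=kube-dns --tail=400
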
	I0603 05:47:12.559859   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:12.559940   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:12.594122   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:12.594445   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:12.594553   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:12.594553   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:12.594599   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.594599   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:12.594625   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:12.594696   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
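The kube-proxy restart log above shows a clean bring-up: the iptables proxier was selected in single-stack IPv4 mode, and all three informer caches (service config, endpoint slice config, node config) synced within a second of start. A sketch of the equivalent per-pod fetch, using the kube-proxy pod name visible in the container table earlier:

    # Same --tail=400 window the harness uses, addressed by pod name.
    kubectl --context multinode-316400 -n kube-system logs kube-proxy-ks64x --tail=400
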
	I0603 05:47:12.597435   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:12.597541   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:12.629115   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.629335   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:12.629335   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.629384   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.629384   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.629418   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:12.629418   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:12.629447   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:12.629447   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:12.629986   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:12.629986   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.630027   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:12.630111   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:12.630145   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:12.630176   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:12.630709   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:12.630773   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:12.630845   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:12.630845   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:12.630893   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:12.630972   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:12.630972   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:12.631019   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:12.631050   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:12.631050   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:12.631611   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:12.631692   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:12.631785   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:12.631785   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:12.631814   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.631851   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631851   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631890   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631954   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.631991   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:12.632034   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:12.632087   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:12.632087   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:12.632129   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:12.632165   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:12.632212   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:12.632248   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:12.632296   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:12.632327   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:12.632399   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:12.632399   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.632437   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:12.632437   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:12.632478   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:12.632516   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:12.632556   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:12.632594   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:12.632628   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.632665   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:12.632665   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:12.632700   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:12.632737   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:12.632737   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:12.632785   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:12.632785   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:12.632822   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:12.632870   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:12.632907   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:12.632907   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:12.632947   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:12.632947   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:12.633004   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:12.633004   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:12.633039   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:12.633039   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:12.633078   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:12.633629   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:12.633629   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:12.633675   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:12.633746   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:12.633783   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:12.633820   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:12.633820   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.633860   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:12.633860   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:12.633897   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:12.633940   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:12.633940   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:12.633986   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:12.633986   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:12.634024   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:12.634060   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:12.634060   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634187   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.634267   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634349   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634405   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634405   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.634475   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.653414   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:12.653414   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
	I0603 05:47:12.682333   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.682801   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.682959   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:12.683007   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683042   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683571   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:12.683664   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.683840   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.684523   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685145   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685668   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685727   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685793   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:12.685793   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685820   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685841   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685897   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.685938   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.685982   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.685982   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:12.686012   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686073   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686619   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:12.686619   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686660   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686763   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686814   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686869   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.686910   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.686958   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.686991   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.686991   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687020   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687552   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:12.687597   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687659   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687659   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687697   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687739   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.687776   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:12.688315   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.688392   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689540   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689576   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689621   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689812   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.689860   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690440   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.690471   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691031   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:12.691123   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691709   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:12.691900   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.692862   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
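	(Annotation, not part of the captured log: the kindnet entries above repeat a fixed ~10-second cycle — enumerate every node, skip the local one, and note each remote node's pod CIDR; the single routes.go line at 12:41:29 fires when multinode-316400-m03 re-registers with a new IP (172.17.87.60) and CIDR (10.244.3.0/24), so a new route is programmed. A minimal, stand-alone Go sketch of that reconcile pattern — purely illustrative, not kindnet's actual source; node names and CIDRs are taken from the log, the use of "ip route replace" is an assumption — might look like:

	// Hedged sketch of a node-CIDR route reconcile loop, assuming the
	// "ip" CLI is available and the process has NET_ADMIN privileges.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// node is a simplified stand-in for the Node objects kindnet watches.
	type node struct {
		name    string
		ip      string // node InternalIP, e.g. 172.17.87.60
		podCIDR string // pod CIDR assigned to the node, e.g. 10.244.3.0/24
		current bool   // true for the node this agent runs on
	}

	func reconcile(nodes []node) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
			if n.current {
				// the local CIDR is served by the local CNI bridge
				fmt.Println("handling current node")
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
			// "ip route replace" is idempotent, so re-running each cycle is safe.
			if err := exec.Command("ip", "route", "replace", n.podCIDR, "via", n.ip).Run(); err != nil {
				fmt.Printf("route update for %s failed: %v\n", n.name, err)
			}
		}
	}

	func main() {
		nodes := []node{
			{name: "multinode-316400", ip: "172.17.87.47", podCIDR: "10.244.0.0/24", current: true},
			{name: "multinode-316400-m02", ip: "172.17.94.201", podCIDR: "10.244.1.0/24"},
			{name: "multinode-316400-m03", ip: "172.17.87.60", podCIDR: "10.244.3.0/24"},
		}
		for range time.Tick(10 * time.Second) { // matches the ~10s cadence in the log
			reconcile(nodes)
		}
	}

	Because the route command is idempotent, the loop only changes kernel state when a node's IP or CIDR actually moves, which is exactly the one routes.go event visible amid the otherwise steady-state cycles above.)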
	I0603 05:47:12.711493   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:12.712455   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:12.744090   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:12.744767   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:12.745347   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:12.745398   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
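	(Annotation, not part of the captured log: the kube-proxy startup above follows the standard client-go gating pattern — start each config controller, log "Waiting for caches to sync", and begin processing only after the matching "Caches are synced" line. The real mechanism is client-go's shared informers and cache.WaitForCacheSync; the stdlib-only Go sketch below is a simplified illustration of that pattern under assumed toy types, not kube-proxy's code:

	package main

	import (
		"fmt"
		"time"
	)

	// informer is a toy stand-in: synced() flips to true once the initial
	// LIST from the apiserver has been replayed into the local cache.
	type informer struct{ done chan struct{} }

	func (i *informer) synced() bool {
		select {
		case <-i.done:
			return true
		default:
			return false
		}
	}

	// waitForCacheSync mirrors the shape of cache.WaitForCacheSync: poll
	// until every informer reports synced, then let handlers run.
	func waitForCacheSync(name string, infs ...*informer) {
		fmt.Printf("Waiting for caches to sync for %s\n", name)
		for {
			allSynced := true
			for _, inf := range infs {
				if !inf.synced() {
					allSynced = false
				}
			}
			if allSynced {
				fmt.Printf("Caches are synced for %s\n", name)
				return
			}
			time.Sleep(100 * time.Millisecond)
		}
	}

	func main() {
		svc := &informer{done: make(chan struct{})}
		eps := &informer{done: make(chan struct{})}
		go func() { time.Sleep(100 * time.Millisecond); close(svc.done) }() // simulated initial LIST
		go func() { time.Sleep(150 * time.Millisecond); close(eps.done) }()
		waitForCacheSync("service config", svc)
		waitForCacheSync("endpoint slice config", eps)
		// only now is it safe to program iptables rules from the caches
	}

	The gate matters because programming iptables from a half-filled cache would delete rules for Services the proxy simply has not seen yet.)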
	I0603 05:47:12.747653   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:12.747695   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:12.785851   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:12.785851   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:12.787154   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:12.789296   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:12.789536   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:12.789605   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:12.789649   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:12.789778   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:12.789830   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:12.789897   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:12.790008   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:12.790083   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:12.790122   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:12.790208   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:12.790250   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:12.790855   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:12.790917   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:12.791022   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:12.791062   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:12.791643   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.791740   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:12.791813   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.791813   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:12.791896   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:12.791970   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792050   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:12.792050   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:12.792193   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:12.792287   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:12.792377   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:12.792455   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:12.792543   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:12.792617   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:12.793168   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:12.793202   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:12.793202   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:12.793243   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:12.793243   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:12.793482   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:12.793558   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:12.793632   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.793708   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:12.793777   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:12.793858   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:12.793933   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
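The "Waiting for caches to sync" / "Caches are synced" pairs in the kube-controller-manager output above are client-go's shared-informer startup handshake: each controller blocks until its watch caches are primed before reconciling anything. A minimal sketch of that pattern, assuming a reachable cluster via the default kubeconfig (illustrative only, not kube-controller-manager's actual wiring):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default kubeconfig at ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		pods := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Counterpart of shared_informer.go's "Waiting for caches to sync ...".
		if !cache.WaitForCacheSync(stop, pods.HasSynced) {
			panic("caches did not sync")
		}
		// Counterpart of "Caches are synced ..."; only now may a controller act.
		fmt.Println("caches synced; controller work can begin")
	}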
	I0603 05:47:12.814157   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:12.814157   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:12.843221   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:12.843269   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843269   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.843852   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.843955   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.843955   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844150   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844221   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844302   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:12.844329   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.844909   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:12.845012   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845080   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845131   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845160   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845241   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.845317   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:12.845394   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.845469   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:12.845469   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845547   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:12.845587   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.845602   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:12.845602   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:12.845681   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:12.845766   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
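The burst of "forbidden" warnings between 12:23:00 and 12:23:02 above is the usual scheduler startup race: kube-scheduler comes up before the apiserver has finished publishing its RBAC bindings, each informer list fails with "cannot list ... at the cluster scope", and the noise stops once "Caches are synced for client-ca::..." lands at 12:23:04. For the extension-apiserver-authentication case the log names its own remediation (the kubectl create rolebinding hint on the requestheader_controller line). If such errors persisted past startup, one way to verify a grant is a SelfSubjectAccessReview, the programmatic form of kubectl auth can-i; a minimal sketch, with the checked verb/resource chosen to match the failing call above (kubeconfig source and checked attributes are assumptions):

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Mirrors the failing call above: may the current identity list
		// configmaps in kube-system?
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Namespace: "kube-system",
					Verb:      "list",
					Resource:  "configmaps",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}

The closing "command failed" err="finished without leader elect" at 12:43:27 is the scheduler process exiting, consistent with the control-plane restart visible at 12:45-12:46 elsewhere in these logs rather than a scheduling failure.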
	I0603 05:47:12.859170   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:12.859170   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:12.886501   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:12.886501   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:12.886501   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:12.887520   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:12.887520   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:12.887557   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:12.887583   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:12.887583   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:12.887615   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
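The kindnet lines above show its periodic sync: roughly every ten seconds (12:46:33, :43, :53, 12:47:03) it walks the node list, treats the node matching hostIP as "handling current node", and programs a route to every other node's pod CIDR via that node's IP; the first list at 12:46:33 times out against the service VIP 10.96.0.1:443 and succeeds on retry. The "Adding route {Ifindex: 0 Dst: 10.244.1.0/24 ... Gw: 172.17.94.201 ...}" line prints a netlink route struct; a minimal sketch of installing such a route with the vishvananda/netlink package, equivalent in spirit to ip route replace 10.244.1.0/24 via 172.17.94.201 (illustrative only, not kindnetd's actual routes.go):

	package main

	import (
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Pod CIDR of multinode-316400-m02 and that node's IP, from the log above.
		_, dst, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			panic(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("172.17.94.201"),
		}
		// Replace (rather than Add) is idempotent, so re-running it on every
		// sync pass is harmless.
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}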
	I0603 05:47:12.890880   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:12.891412   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:12.916815   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:12.916944   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:12.916944   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:12.916944   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
	I0603 05:47:12.919126   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:12.919126   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.957089   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958195   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:12.958195   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958304   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958304   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:12.958304   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:12.958356   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:12.958356   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958356   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958398   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:12.958493   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958493   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:12.958568   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:12.958568   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:12.958642   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958642   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:12.958715   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:12.958715   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.958796   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:12.958796   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:12.958842   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:12.958842   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:12.958881   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:12.958967   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:12.959052   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:12.959137   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.959137   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:12.959165   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:12.959193   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:12.959193   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
	I0603 05:47:12.968828   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:12.968828   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:12.998550   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-ref
resh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cl
uster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:12.998577   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:12.999173   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:12.999238   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:12.999350   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:12.999350   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:12.999389   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:12.999477   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:12.999527   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:12.999527   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:12.999564   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:12.999564   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:12.999644   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:12.999644   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:12.999688   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:12.999688   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:12.999727   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
	I0603 05:47:13.007028   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:13.007176   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:13.040482   10844 command_runner.go:130] > .:53
	I0603 05:47:13.040561   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:13.040612   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:13.040612   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:13.040612   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:13.040663   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:13.040663   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:13.040696   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:13.040696   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:13.040740   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:13.040740   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:13.040773   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:13.040804   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 05:47:13.044558   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:13.044589   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:13.077930   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:13.078046   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:13.078136   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:13.078742   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078813   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078813   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:13.078882   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:13.078909   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:13.078937   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:13.078969   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.078969   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079021   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:13.079071   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:13.079071   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:13.079140   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:13.079217   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:13.079297   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:13.079328   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:13.079328   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:13.079364   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:13.079408   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:13.079430   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079474   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:13.079493   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:13.080077   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.080077   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.080162   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.080162   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.080213   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:13.080213   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:13.080286   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:13.080320   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:13.080371   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:13.080423   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:13.080445   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:13.080445   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:13.080492   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:13.080530   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:13.080530   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:13.080572   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.080601   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.081128   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:13.081174   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:13.081230   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081270   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081304   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:13.081304   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081344   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.081431   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:13.081986   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:13.082104   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:13.082166   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:13.082210   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:13.082250   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:13.082285   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.082285   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.082325   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083014   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083124   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:13.083124   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083215   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083254   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083254   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083342   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083926   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.083972   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.083972   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084093   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084133   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084185   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084225   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.084262   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.084301   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084335   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084408   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.084446   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084480   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084553   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084737   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084840   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.084840   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.084892   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.084968   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085016   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085601   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085682   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085682   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085784   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085824   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085883   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:13.085883   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:13.085922   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:13.086513   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:13.086579   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:13.086579   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
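The kubelet excerpt above captures two separate problems: storage-provisioner cycling through a 10s CrashLoopBackOff, and projected-volume mounts for the busybox pod failing until "kube-root-ca.crt" is re-registered after the restart. A minimal manual check of the back-off state, assuming the same multinode-316400 kubectl context that minikube creates, would be:

    kubectl --context multinode-316400 -n kube-system get pod storage-provisioner -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

While the container is backing off this prints CrashLoopBackOff; once it restarts cleanly the waiting state is absent and the output is empty.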
	I0603 05:47:13.140022   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:13.140022   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:13.374314   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:13.374314   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:13.374314   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.374314   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.374314   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:13.374314   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:13.374314   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.374314   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.374314   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:12 +0000
	I0603 05:47:13.374314   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:13.374314   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:13.374314   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:13.374314   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:13.374314   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:13.374314   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.374314   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.374314   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.374314   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:13.374314   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.374314   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.374314   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:13.374314   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:13.374314   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:13.374314   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.374314   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.374314   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	I0603 05:47:13.374314   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:13.375275   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:13.375275   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.375275   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:13.375275   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:13.375275   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:13.375275   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:13.375275   10844 command_runner.go:130] > Events:
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:13.375412   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:13.375412   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.375521   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:13.375567   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:13.375593   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.375613   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:13.375682   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:13.375682   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.375682   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.375682   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:13.375682   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:13.375682   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:13.375682   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.375682   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.375682   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:13.375682   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:13.375682   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.375682   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:13.375682   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.375682   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.375682   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.375682   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.375682   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.375682   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.375682   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:13.375682   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:13.375682   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.375682   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.376257   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:13.376257   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:13.376257   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:13.376257   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.376257   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.376447   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.376447   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0603 05:47:13.376447   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:13.376447   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.376447   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.376447   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:13.376447   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:13.376447   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:13.376545   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:13.376545   10844 command_runner.go:130] > Events:
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:13.376545   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:13.376545   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:13.376668   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:13.376743   10844 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:13.376743   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:13.376743   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:13.376774   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:13.376774   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:13.376928   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:13.376928   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:13.376971   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:13.376971   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:13.376971   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:13.377012   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:13.377012   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:13.377012   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:13.377050   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:13.377050   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:13.377050   10844 command_runner.go:130] > Lease:
	I0603 05:47:13.377050   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:13.377050   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:13.377116   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:13.377116   10844 command_runner.go:130] > Conditions:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:13.377116   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:13.377116   10844 command_runner.go:130] > Addresses:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:13.377116   10844 command_runner.go:130] > Capacity:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.377116   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.377116   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:13.377116   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:13.377116   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:13.377116   10844 command_runner.go:130] > System Info:
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:13.377116   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:13.377116   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:13.377116   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:13.377716   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:13.377716   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:13.377716   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:13.377716   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:13.377716   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:13.377716   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:13.377716   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:13.377716   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:13.377904   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:13.377904   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:13.377904   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:13.377904   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:13.377987   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:13.377987   10844 command_runner.go:130] > Events:
	I0603 05:47:13.377987   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:13.378062   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:13.378062   10844 command_runner.go:130] >   Normal  Starting                 5m42s                  kube-proxy       
	I0603 05:47:13.378062   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:13.378169   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.378239   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:13.378239   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:13.378332   10844 command_runner.go:130] >   Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	I0603 05:47:13.378421   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:13.378421   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:13.378497   10844 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:13.378570   10844 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:13.378570   10844 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:13.378642   10844 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
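In the describe output above only the control-plane node is Ready; both workers show every condition as Unknown ("Kubelet stopped posting node status") and carry the node.kubernetes.io/unreachable taints with NoSchedule and NoExecute effects, which the node-controller applies automatically for unreachable nodes. A compact way to confirm which nodes are tainted, as a sketch against the same cluster, is:

    kubectl --context multinode-316400 get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key

The NoExecute taint is what will eventually evict the busybox pod from multinode-316400-m02 if the node stays unreachable past the default 300s unreachable toleration.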
	I0603 05:47:13.390046   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:13.390046   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:13.415295   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:13.415295   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
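The scheduler warnings above are the stock startup messages printed before the extension-apiserver-authentication configmap can be read; here they resolve on their own once the client-ca informer syncs (last line). If they persisted, the remedy is the rolebinding the message itself spells out; with illustrative names substituted for its ROLEBINDING_NAME and YOUR_NS:YOUR_SA placeholders it would look like:

    kubectl -n kube-system create rolebinding scheduler-auth-reader --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:kube-scheduler

Both scheduler-auth-reader and the kube-system:kube-scheduler service account are assumptions for the sketch, not values taken from this cluster.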
	I0603 05:47:15.927611   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:47:15.934787   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
	I0603 05:47:15.935469   10844 round_trippers.go:463] GET https://172.17.95.88:8443/version
	I0603 05:47:15.935572   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:15.935572   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:15.935643   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:15.937252   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:47:15.937252   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:15.937252   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Content-Length: 263
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:15 GMT
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Audit-Id: 13a9976f-4eba-4aa5-b8ce-cd9a75caa81d
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:15.937252   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:15.937252   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:15.937252   10844 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 05:47:15.937252   10844 api_server.go:141] control plane version: v1.30.1
	I0603 05:47:15.937252   10844 api_server.go:131] duration metric: took 3.814714s to wait for apiserver health ...
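The health wait above polls two apiserver endpoints that kubeadm-style clusters typically expose to anonymous clients via the system:public-info-viewer role. An equivalent manual probe from the host, skipping TLS verification for brevity, would be:

    curl -sk https://172.17.95.88:8443/healthz
    curl -sk https://172.17.95.88:8443/version

The first returns the literal string ok on a healthy apiserver; the second returns the same JSON version document shown above.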
	I0603 05:47:15.937252   10844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 05:47:15.946206   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 05:47:15.986773   10844 command_runner.go:130] > a9b10f4d479a
	I0603 05:47:15.987010   10844 logs.go:276] 1 containers: [a9b10f4d479a]
	I0603 05:47:15.997273   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 05:47:16.022845   10844 command_runner.go:130] > ef3c01484867
	I0603 05:47:16.022845   10844 logs.go:276] 1 containers: [ef3c01484867]
	I0603 05:47:16.031673   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 05:47:16.057686   10844 command_runner.go:130] > 4241e2ff2dfe
	I0603 05:47:16.057716   10844 command_runner.go:130] > 8280b3904678
	I0603 05:47:16.057716   10844 logs.go:276] 2 containers: [4241e2ff2dfe 8280b3904678]
	I0603 05:47:16.066444   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 05:47:16.088421   10844 command_runner.go:130] > 334bb0174b55
	I0603 05:47:16.088421   10844 command_runner.go:130] > f39be6db7a1f
	I0603 05:47:16.089495   10844 logs.go:276] 2 containers: [334bb0174b55 f39be6db7a1f]
	I0603 05:47:16.098269   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 05:47:16.120854   10844 command_runner.go:130] > 09616a16042d
	I0603 05:47:16.120854   10844 command_runner.go:130] > ad08c7b8f3af
	I0603 05:47:16.120962   10844 logs.go:276] 2 containers: [09616a16042d ad08c7b8f3af]
	I0603 05:47:16.131692   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 05:47:16.155491   10844 command_runner.go:130] > cbaa09a85a64
	I0603 05:47:16.155491   10844 command_runner.go:130] > 3d7dc29a5791
	I0603 05:47:16.155491   10844 logs.go:276] 2 containers: [cbaa09a85a64 3d7dc29a5791]
	I0603 05:47:16.166122   10844 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 05:47:16.186307   10844 command_runner.go:130] > 3a08a76e2a79
	I0603 05:47:16.186307   10844 command_runner.go:130] > a00a9dc2a937
	I0603 05:47:16.186307   10844 logs.go:276] 2 containers: [3a08a76e2a79 a00a9dc2a937]
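Each component is located with a docker name filter because the kubelet (via cri-dockerd) names containers k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, so a prefix match on k8s_kube-proxy catches every kube-proxy container, including exited pre-restart ones, which is why several components list two IDs here. The full names behind the IDs can be seen with, for example:

    docker ps -a --filter=name=k8s_kube-proxy --format "{{.ID}} {{.Names}}"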
	I0603 05:47:16.186844   10844 logs.go:123] Gathering logs for Docker ...
	I0603 05:47:16.186844   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 05:47:16.219860   10844 command_runner.go:130] > Jun 03 12:44:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:24 minikube cri-dockerd[224]: time="2024-06-03T12:44:24Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.220802   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.220929   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.220974   10844 command_runner.go:130] > Jun 03 12:44:25 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.220974   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.221012   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube cri-dockerd[402]: time="2024-06-03T12:44:27Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:27 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 05:47:16.221107   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:29 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube cri-dockerd[423]: time="2024-06-03T12:44:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:30 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221219   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 05:47:16.221344   10844 command_runner.go:130] > Jun 03 12:44:32 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
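The three back-to-back cri-docker failures above (each because dockerd was not yet up) exhaust systemd's start-rate limit, which is what "Start request repeated too quickly" means: systemd leaves the unit failed rather than retrying further until it is explicitly restarted. The rate-limit settings in effect can be inspected inside the VM with:

    systemctl show cri-docker.service -p Restart -p RestartSec -p StartLimitBurst -p StartLimitIntervalUSec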
	I0603 05:47:16.221405   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:16.221405   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.771561443Z" level=info msg="Starting up"
	I0603 05:47:16.221467   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.772532063Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:16.221467   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[653]: time="2024-06-03T12:45:17.773624286Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.808811320Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832632417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832678118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832736520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:16.221510   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.832759220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221610   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833244930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221610   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833408234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221692   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833576137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221692   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833613138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833628938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.833638438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.834164449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221735   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.835025267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221823   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838417938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221823   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838538341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.221900   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838679444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.838769945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839497061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839606563Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:16.221944   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.839624563Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845634889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845777492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845800892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:16.222057   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845816092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:16.222136   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845839393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.845906994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846346204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846529007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.222177   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846620809Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846640810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846654910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846667810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846680811Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846694511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222265   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846708411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846721811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846733912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846744912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.222388   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846773112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222472   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846788913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222472   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846800513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846828814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846839914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846851514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846862614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846874615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222518   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846886615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846899615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846955316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846981817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.846994617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222618   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847010117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847031418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847043818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847054818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:16.222734   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847167021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:16.222833   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847253922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:16.222833   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847272023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847284523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847328424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847344024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:16.222893   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847358325Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847619130Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847749533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847791734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:17 multinode-316400 dockerd[659]: time="2024-06-03T12:45:17.847827434Z" level=info msg="containerd successfully booted in 0.041960s"
	I0603 05:47:16.223004   10844 command_runner.go:130] > Jun 03 12:45:18 multinode-316400 dockerd[653]: time="2024-06-03T12:45:18.826654226Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.061854651Z" level=info msg="Loading containers: start."
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.457966557Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.535734595Z" level=info msg="Loading containers: done."
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.564526187Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.565436112Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624671041Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 dockerd[653]: time="2024-06-03T12:45:19.624909048Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:19 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:16.223118   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.830891929Z" level=info msg="Processing signal 'terminated'"
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 systemd[1]: Stopping Docker Application Container Engine...
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.834353661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835003667Z" level=info msg="Daemon shutdown complete"
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835050568Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 05:47:16.223355   10844 command_runner.go:130] > Jun 03 12:45:45 multinode-316400 dockerd[653]: time="2024-06-03T12:45:45.835251069Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: docker.service: Deactivated successfully.
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Stopped Docker Application Container Engine.
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 systemd[1]: Starting Docker Application Container Engine...
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.915575270Z" level=info msg="Starting up"
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.916682280Z" level=info msg="containerd not running, starting managed containerd"
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:46.918008093Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1054
	I0603 05:47:16.223470   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.949666883Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972231590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972400191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972438091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972452692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223585   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972476692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972488892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972615793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972703794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972759294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 05:47:16.223702   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972796595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.972955396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975272817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975362818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 05:47:16.223826   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975484219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975568720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975596620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975613521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975624221Z" level=info msg="metadata content store policy set" policy=shared
	I0603 05:47:16.223939   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.975878823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976092925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976118125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976134225Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976151125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 05:47:16.224052   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976204926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976547129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976675630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976808532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976873932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 05:47:16.224180   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976891332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224272   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976903432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976914332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976926833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976940833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224300   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976953033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976964333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.976974233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977000233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977014733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977026033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224390   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977037834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977048934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977060334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977071734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977082834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977094934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224506   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977108234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977131234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977142235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977155935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 05:47:16.224618   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977174635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977186435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977200035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977321036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977450137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 05:47:16.224744   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977475038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 05:47:16.224879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977491338Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 05:47:16.224879   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977502538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977515638Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977525838Z" level=info msg="NRI interface is disabled by configuration."
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977793041Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977944442Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 05:47:16.225004   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.977993342Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:46 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:46.978082843Z" level=info msg="containerd successfully booted in 0.029905s"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.958072125Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:47 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:47.992700342Z" level=info msg="Loading containers: start."
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.284992921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 05:47:16.225087   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.371138910Z" level=info msg="Loading containers: done."
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397139049Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.397280650Z" level=info msg="Daemon has completed initialization"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.446056397Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 systemd[1]: Started Docker Application Container Engine.
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:48 multinode-316400 dockerd[1048]: time="2024-06-03T12:45:48.451246244Z" level=info msg="API listen on [::]:2376"
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 05:47:16.225212   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start docker client with request timeout 0s"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Loaded network plugin cni"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 05:47:16.225321   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:49Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:49 multinode-316400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 05:47:16.225434   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4hrc6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e\""
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-pm79t_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4\""
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729841851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.729937752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225547   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.730811260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225636   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.732365774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831787585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831902586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.831956587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225671   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.832202689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912447024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912562925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.912807128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225770   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31bce861be7b718722ced8a5abaaaf80e01691edf1873a82a8467609ec04d725/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948298553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948519555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948541855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225879   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:55.948688056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.225993   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5938c827a45b5720a54e096dfe79ff973a8724c39f2dfa24cf2bc4e1f3a14c6e/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:45:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226022   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211361864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211466465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211486965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.211585266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226111   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.402470615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403083421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.403253922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.410900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226224   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474478075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.474699377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.475925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486666687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226332   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486786488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226418   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.486800688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226418   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 dockerd[1054]: time="2024-06-03T12:45:56.487211092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:00Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566084538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226447   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566367341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.566479442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.567551052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.582198686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586189923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226540   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.586494625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.587318633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636541684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636617385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636635485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226661   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:01.636992688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226774   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226774   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.226826   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.129414501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226826   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130210008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130291809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.130470711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147517467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.226866   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.147958771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.226967   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148118573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.226967   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.148818379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:46:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423300695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227025   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.423802099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227101   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.424025901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227246   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:02.427457533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1048]: time="2024-06-03T12:46:32.704571107Z" level=info msg="ignoring event" container=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705364020Z" level=info msg="shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705622124Z" level=warning msg="cleaning up after shim disconnected" id=eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.705874328Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:32.728397491Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129026230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129403835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129427335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:46:45 multinode-316400 dockerd[1054]: time="2024-06-03T12:46:45.129696138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309701115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309935818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.309957118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.310113120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316797286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.316993688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317155090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.317526994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9/resolv.conf as [nameserver 172.17.80.1]"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 cri-dockerd[1274]: time="2024-06-03T12:47:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899305562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227280   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899391863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899429263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.899555364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.936994844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937073745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937090545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 dockerd[1054]: time="2024-06-03T12:47:05.937338347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.227826   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228116   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228183   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228183   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:08 multinode-316400 dockerd[1048]: 2024/06/03 12:47:08 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 05:47:16.228260   10844 command_runner.go:130] > Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
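The repeated dockerd warnings above come from Go's net/http, which logs "http: superfluous response.WriteHeader call" whenever a handler writes the response header more than once; the second status code is discarded. Here the extra call is made by the otelhttp wrapper named in each line. A minimal Go sketch (not from the report) that reproduces the warning:

package main

import (
	"net/http"
	"net/http/httptest"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		// The second WriteHeader is ignored, and net/http logs
		// "http: superfluous response.WriteHeader call from ..." to stderr,
		// matching the dockerd lines above.
		w.WriteHeader(http.StatusInternalServerError)
	})
	srv := httptest.NewServer(h)
	defer srv.Close()
	http.Get(srv.URL) // drive one request through the handler
}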
	I0603 05:47:16.261521   10844 logs.go:123] Gathering logs for dmesg ...
	I0603 05:47:16.261521   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 05:47:16.285629   10844 command_runner.go:130] > [Jun 3 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.129332] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.024453] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 05:47:16.286495   10844 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.058085] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.021687] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 05:47:16.286637   10844 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 05:47:16.286637   10844 command_runner.go:130] > [  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 05:47:16.286698   10844 command_runner.go:130] > [Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0603 05:47:16.286698   10844 command_runner.go:130] > [  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	I0603 05:47:16.286793   10844 command_runner.go:130] > [  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
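The dmesg excerpt above is the output of the exact pipeline logged at ssh_runner.go:195 (sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400). A local Go approximation of that collection step, assuming it runs on the guest itself rather than over SSH as minikube's ssh_runner does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command string the log gatherer runs remotely; sudo and bash
	// must be available, and only warning-and-above kernel messages are kept.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}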
	I0603 05:47:16.288838   10844 logs.go:123] Gathering logs for coredns [8280b3904678] ...
	I0603 05:47:16.288838   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8280b3904678"
	I0603 05:47:16.321653   10844 command_runner.go:130] > .:53
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:16.321734   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:16.321734   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 127.0.0.1:42160 - 49231 "HINFO IN 7758649785632377755.6167658315586765337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046714522s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279598s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:58454 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208411566s
	I0603 05:47:16.321734   10844 command_runner.go:130] > [INFO] 10.244.1.2:41741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.13626297s
	I0603 05:47:16.321815   10844 command_runner.go:130] > [INFO] 10.244.1.2:34878 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.105138942s
	I0603 05:47:16.321815   10844 command_runner.go:130] > [INFO] 10.244.0.3:55537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268797s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:46426 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000881s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:52879 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174998s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.0.3:43420 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000100699s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:58392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115599s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:44885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024455563s
	I0603 05:47:16.321849   10844 command_runner.go:130] > [INFO] 10.244.1.2:42255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000337996s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:41386 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245097s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:55181 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012426179s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:35256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164099s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:57960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110199s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.1.2:37875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160198s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.0.3:59586 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165898s
	I0603 05:47:16.321944   10844 command_runner.go:130] > [INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	I0603 05:47:16.322107   10844 command_runner.go:130] > [INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	I0603 05:47:16.322107   10844 command_runner.go:130] > [INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	I0603 05:47:16.322147   10844 command_runner.go:130] > [INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	I0603 05:47:16.322250   10844 command_runner.go:130] > [INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	I0603 05:47:16.322339   10844 command_runner.go:130] > [INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 05:47:16.322419   10844 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
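The CoreDNS queries above show the pod resolver walking its search path: kubernetes.default and kubernetes.default.default.svc.cluster.local return NXDOMAIN before kubernetes.default.svc.cluster.local answers NOERROR, which is exactly what the resolv.conf rewritten earlier by cri-dockerd (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5) produces. A sketch of the lookup that generates those entries, assuming it runs inside a pod on this cluster; elsewhere the name will not resolve:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The stub resolver appends each search suffix in turn until one
	// answers, generating the NXDOMAIN/NOERROR sequence in the log above.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}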
	I0603 05:47:16.325696   10844 logs.go:123] Gathering logs for kube-controller-manager [3d7dc29a5791] ...
	I0603 05:47:16.325696   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d7dc29a5791"
	I0603 05:47:16.351742   10844 command_runner.go:130] ! I0603 12:22:58.709734       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:16.351742   10844 command_runner.go:130] ! I0603 12:22:59.476409       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:16.352158   10844 command_runner.go:130] ! I0603 12:22:59.477144       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.352287   10844 command_runner.go:130] ! I0603 12:22:59.479107       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.479482       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.480446       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:22:59.480646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.352353   10844 command_runner.go:130] ! I0603 12:23:03.879622       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:16.352413   10844 command_runner.go:130] ! I0603 12:23:03.880293       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:16.352413   10844 command_runner.go:130] ! I0603 12:23:03.880027       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:16.352498   10844 command_runner.go:130] ! I0603 12:23:03.898013       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.898158       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.898213       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919140       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919340       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.919371       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.929290       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.929541       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:03.981652       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960621       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960663       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960672       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960922       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.960933       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.982079       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.983455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:16.352527   10844 command_runner.go:130] ! I0603 12:23:13.983548       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:16.353068   10844 command_runner.go:130] ! E0603 12:23:14.000699       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:16.353068   10844 command_runner.go:130] ! I0603 12:23:14.000725       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:16.353118   10844 command_runner.go:130] ! I0603 12:23:14.000737       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.000744       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014097       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.014579       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039289       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039520       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:16.353180   10844 command_runner.go:130] ! I0603 12:23:14.039555       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.066064       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.066460       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.067547       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080694       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080928       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.080942       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.090915       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.091127       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.112300       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.112981       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.113168       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115290       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115472       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.115914       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:16.355020   10844 command_runner.go:130] ! I0603 12:23:14.116287       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:16.356807   10844 command_runner.go:130] ! I0603 12:23:14.138094       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:16.357147   10844 command_runner.go:130] ! I0603 12:23:14.138554       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:16.357258   10844 command_runner.go:130] ! I0603 12:23:14.138571       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:16.357469   10844 command_runner.go:130] ! I0603 12:23:14.156457       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.157066       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.157201       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:16.357532   10844 command_runner.go:130] ! I0603 12:23:14.299010       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:16.358579   10844 command_runner.go:130] ! I0603 12:23:14.299494       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.299867       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448653       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448790       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.448807       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.598920       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:16.358668   10844 command_runner.go:130] ! I0603 12:23:14.599459       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:16.358742   10844 command_runner.go:130] ! I0603 12:23:14.599613       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747435       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747595       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747608       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.747617       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.794967       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795092       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795473       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.795623       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.796055       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:16.358778   10844 command_runner.go:130] ! I0603 12:23:14.947799       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:14.947966       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:14.948148       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253800       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:16.358918   10844 command_runner.go:130] ! I0603 12:23:15.253851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:16.358999   10844 command_runner.go:130] ! W0603 12:23:15.253890       1 shared_informer.go:597] resyncPeriod 20h27m39.878927139s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:16.358999   10844 command_runner.go:130] ! I0603 12:23:15.254123       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:16.359199   10844 command_runner.go:130] ! I0603 12:23:15.254392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.254514       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.255105       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:16.359264   10844 command_runner.go:130] ! I0603 12:23:15.255639       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.255930       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.256059       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:16.359342   10844 command_runner.go:130] ! I0603 12:23:15.256381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.256652       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.256978       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:16.359404   10844 command_runner.go:130] ! I0603 12:23:15.257200       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.257574       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.257864       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:16.359470   10844 command_runner.go:130] ! I0603 12:23:15.258216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! W0603 12:23:15.258585       1 shared_informer.go:597] resyncPeriod 18h8m55.919288475s is smaller than resyncCheckPeriod 22h4m12.726278312s and the informer has already started. Changing it to 22h4m12.726278312s
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.258823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.258977       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:16.359533   10844 command_runner.go:130] ! I0603 12:23:15.259197       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259267       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259531       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259645       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.259859       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:16.359595   10844 command_runner.go:130] ! I0603 12:23:15.400049       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.400251       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.400362       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.550028       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:16.359660   10844 command_runner.go:130] ! I0603 12:23:15.550108       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:16.359717   10844 command_runner.go:130] ! I0603 12:23:15.550118       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:16.359779   10844 command_runner.go:130] ! I0603 12:23:15.744039       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:16.359846   10844 command_runner.go:130] ! I0603 12:23:15.744209       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:16.359909   10844 command_runner.go:130] ! I0603 12:23:15.744288       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:16.359909   10844 command_runner.go:130] ! I0603 12:23:15.744381       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:16.359966   10844 command_runner.go:130] ! E0603 12:23:15.795003       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:16.359966   10844 command_runner.go:130] ! I0603 12:23:15.795251       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:16.360044   10844 command_runner.go:130] ! I0603 12:23:15.951102       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:16.360044   10844 command_runner.go:130] ! I0603 12:23:15.951175       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:15.951186       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103214       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103538       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:16.360134   10844 command_runner.go:130] ! I0603 12:23:16.103703       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:16.360244   10844 command_runner.go:130] ! I0603 12:23:16.152626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:16.360244   10844 command_runner.go:130] ! I0603 12:23:16.152712       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:16.360330   10844 command_runner.go:130] ! I0603 12:23:16.153331       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:16.360369   10844 command_runner.go:130] ! I0603 12:23:16.153697       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.153983       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.154153       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:16.360437   10844 command_runner.go:130] ! I0603 12:23:16.154254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.154552       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155315       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155447       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:16.360532   10844 command_runner.go:130] ! I0603 12:23:16.155494       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360623   10844 command_runner.go:130] ! I0603 12:23:16.156193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:16.360668   10844 command_runner.go:130] ! I0603 12:23:16.156626       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:16.360710   10844 command_runner.go:130] ! I0603 12:23:16.156664       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:16.360710   10844 command_runner.go:130] ! I0603 12:23:16.298448       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:16.360764   10844 command_runner.go:130] ! I0603 12:23:16.298743       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:16.360764   10844 command_runner.go:130] ! I0603 12:23:16.298803       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.457482       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.458106       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:16.360829   10844 command_runner.go:130] ! I0603 12:23:16.458255       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603442       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603819       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.603900       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.795254       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.795875       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:16.360913   10844 command_runner.go:130] ! I0603 12:23:16.948611       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:16.361051   10844 command_runner.go:130] ! I0603 12:23:16.948652       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:16.361051   10844 command_runner.go:130] ! I0603 12:23:16.948726       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:16.949131       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206218       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206341       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.206354       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:16.361108   10844 command_runner.go:130] ! I0603 12:23:17.443986       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444026       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444652       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.444677       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702103       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702517       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:16.361208   10844 command_runner.go:130] ! I0603 12:23:17.702550       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851156       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851357       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:17.851370       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.000740       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.003147       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:16.361321   10844 command_runner.go:130] ! I0603 12:23:18.003208       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.013736       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.042698       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:16.361435   10844 command_runner.go:130] ! I0603 12:23:18.049024       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.049393       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.049619       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.052020       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.052116       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.058451       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:16.361546   10844 command_runner.go:130] ! I0603 12:23:18.063949       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.063997       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064022       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064027       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.064033       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.076606       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.097633       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:16.361659   10844 command_runner.go:130] ! I0603 12:23:18.097738       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098286       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098375       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.098877       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.100321       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.100587       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103320       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103450       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:16.361777   10844 command_runner.go:130] ! I0603 12:23:18.103468       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.107067       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.108430       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.112806       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.113161       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.114212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400" podCIDRs=["10.244.0.0/24"]
	I0603 05:47:16.361898   10844 command_runner.go:130] ! I0603 12:23:18.114620       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.116662       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.120085       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.129657       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.139133       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.141026       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.152060       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:16.362025   10844 command_runner.go:130] ! I0603 12:23:18.154508       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.154683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.156204       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.157708       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.159229       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.202824       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:16.362142   10844 command_runner.go:130] ! I0603 12:23:18.204977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.213840       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.215208       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.245546       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.260135       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.303335       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.744986       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:16.362257   10844 command_runner.go:130] ! I0603 12:23:18.745263       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:18.809407       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.424454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="514.197479ms"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.464600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.963409ms"
	I0603 05:47:16.362383   10844 command_runner.go:130] ! I0603 12:23:19.466851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="298.789µs"
	I0603 05:47:16.362504   10844 command_runner.go:130] ! I0603 12:23:19.498655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.895µs"
	I0603 05:47:16.362504   10844 command_runner.go:130] ! I0603 12:23:20.284713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.277959ms"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:20.306638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.621245ms"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:20.307533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.598µs"
	I0603 05:47:16.362547   10844 command_runner.go:130] ! I0603 12:23:30.907970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.098µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:30.939967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.798µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.780060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.836151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.129991ms"
	I0603 05:47:16.362658   10844 command_runner.go:130] ! I0603 12:23:32.836508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.302µs"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:23:33.100283       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:16.362790   10844 command_runner.go:130] ! I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 05:47:16.362888   10844 command_runner.go:130] ! I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 05:47:16.362986   10844 command_runner.go:130] ! I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 05:47:16.363084   10844 command_runner.go:130] ! I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363084   10844 command_runner.go:130] ! I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:16.363157   10844 command_runner.go:130] ! I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 05:47:16.363225   10844 command_runner.go:130] ! I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:16.363225   10844 command_runner.go:130] ! I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363350   10844 command_runner.go:130] ! I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 05:47:16.363470   10844 command_runner.go:130] ! I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:16.363470   10844 command_runner.go:130] ! I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
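The controller-manager entries above show the node-ipam-controller handing multinode-316400-m03 a fresh PodCIDR (10.244.2.0/24, later 10.244.3.0/24) each time the node re-registers, and the node-lifecycle-controller marking m02/m03 NotReady once their kubelets stop posting status. A minimal spot-check sketch, assuming the profile's default kubeconfig context name (an assumption; the context name does not appear in this log):

    # Hypothetical spot-check of assigned PodCIDRs and node readiness;
    # context name "multinode-316400" is assumed from the profile name.
    kubectl --context multinode-316400 get nodes -o wide
    kubectl --context multinode-316400 get node multinode-316400-m03 \
      -o jsonpath='{.spec.podCIDRs}{"\n"}'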
	I0603 05:47:16.382240   10844 logs.go:123] Gathering logs for kindnet [3a08a76e2a79] ...
	I0603 05:47:16.382240   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a08a76e2a79"
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.050827       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051229       1 main.go:107] hostIP = 172.17.95.88
	I0603 05:47:16.409867   10844 command_runner.go:130] ! podIP = 172.17.95.88
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051377       1 main.go:116] setting mtu 1500 for CNI 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051397       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:03.051417       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.483366       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.505262       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.505362       1 main.go:227] handling current node
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506144       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.94.201 Flags: [] Table: 0} 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506651       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506661       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:33.506765       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512187       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512270       1 main.go:227] handling current node
	I0603 05:47:16.409867   10844 command_runner.go:130] ! I0603 12:46:43.512283       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512290       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512906       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:43.512944       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:53.529047       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.410994   10844 command_runner.go:130] ! I0603 12:46:53.529290       1 main.go:227] handling current node
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529365       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529466       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.529947       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411105   10844 command_runner.go:130] ! I0603 12:46:53.530023       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545370       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545467       1 main.go:227] handling current node
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545481       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411191   10844 command_runner.go:130] ! I0603 12:47:03.545487       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:03.545994       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:03.546064       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.562103       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.563112       1 main.go:227] handling current node
	I0603 05:47:16.411249   10844 command_runner.go:130] ! I0603 12:47:13.563361       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.563375       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.563657       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:16.411317   10844 command_runner.go:130] ! I0603 12:47:13.564016       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
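The kindnet log above shows the daemon resolving each peer node's pod CIDR and programming a host route to it via that peer's InternalIP (e.g. 10.244.1.0/24 via 172.17.94.201). A hedged way to inspect the resulting routing table from the host follows; kindnet itself programs routes through netlink, so the commented `ip route add` lines are an illustrative equivalent taken from the log, not its actual code path:

    # Show the routes kindnet programmed inside the primary node.
    minikube -p multinode-316400 ssh -- ip route show
    # Manual equivalents of the two route adds logged above (illustrative):
    #   ip route add 10.244.1.0/24 via 172.17.94.201
    #   ip route add 10.244.3.0/24 via 172.17.87.60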
	I0603 05:47:16.415628   10844 logs.go:123] Gathering logs for describe nodes ...
	I0603 05:47:16.415658   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 05:47:16.623778   10844 command_runner.go:130] > Name:               multinode-316400
	I0603 05:47:16.624531   10844 command_runner.go:130] > Roles:              control-plane
	I0603 05:47:16.624531   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.624531   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.624531   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 05:47:16.624637   10844 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 05:47:16.624763   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.624763   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.624763   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.624763   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	I0603 05:47:16.624826   10844 command_runner.go:130] > Taints:             <none>
	I0603 05:47:16.624826   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.624826   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.624826   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400
	I0603 05:47:16.624826   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.624870   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:47:12 +0000
	I0603 05:47:16.624870   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.624870   10844 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 05:47:16.624870   10844 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 05:47:16.624931   10844 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 05:47:16.624931   10844 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 05:47:16.624991   10844 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 05:47:16.624991   10844 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	I0603 05:47:16.624991   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.624991   10844 command_runner.go:130] >   InternalIP:  172.17.95.88
	I0603 05:47:16.624991   10844 command_runner.go:130] >   Hostname:    multinode-316400
	I0603 05:47:16.624991   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.624991   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.625075   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.625075   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.625075   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.625075   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.625075   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.625075   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.625075   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.625136   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.625136   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.625136   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.625136   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.625136   10844 command_runner.go:130] >   Machine ID:                 babca97119de4d6fa999cc452dbf962d
	I0603 05:47:16.625136   10844 command_runner.go:130] >   System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.625216   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.625216   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.625523   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.625564   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.625564   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.625564   10844 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 05:47:16.625564   10844 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 05:47:16.625625   10844 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 05:47:16.625625   10844 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.625625   10844 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.625625   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 05:47:16.625625   10844 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0603 05:47:16.625734   10844 command_runner.go:130] >   kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0603 05:47:16.625763   10844 command_runner.go:130] >   kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:16.625808   10844 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 05:47:16.625882   10844 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0603 05:47:16.625882   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.625882   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.625882   10844 command_runner.go:130] >   Resource           Requests     Limits
	I0603 05:47:16.625882   10844 command_runner.go:130] >   --------           --------     ------
	I0603 05:47:16.625882   10844 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 05:47:16.625882   10844 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 05:47:16.625970   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 05:47:16.625970   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 05:47:16.625970   10844 command_runner.go:130] > Events:
	I0603 05:47:16.625970   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:16.625970   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:16.626028   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.626109   10844 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 05:47:16.626109   10844 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-316400 status is now: NodeReady
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0603 05:47:16.626137   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.626189   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	I0603 05:47:16.626226   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	I0603 05:47:16.626248   10844 command_runner.go:130] > Name:               multinode-316400-m02
	I0603 05:47:16.626248   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:16.626248   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.626248   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m02
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.626286   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_26_18_0700
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.626377   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.626377   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.626430   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.626430   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:26:17 +0000
	I0603 05:47:16.626430   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:16.626464   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:16.626464   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.626464   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.626464   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m02
	I0603 05:47:16.626524   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.626524   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:47 +0000
	I0603 05:47:16.626524   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.626605   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:16.626605   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:16.626605   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:42:38 +0000   Mon, 03 Jun 2024 12:46:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.626667   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.626667   10844 command_runner.go:130] >   InternalIP:  172.17.94.201
	I0603 05:47:16.626728   10844 command_runner.go:130] >   Hostname:    multinode-316400-m02
	I0603 05:47:16.626728   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.626728   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.626728   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.626728   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.626728   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.626728   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.626728   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.626801   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.626801   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.626801   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.626801   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.626801   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.626801   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Machine ID:                 6dfd6d7a84bd4993a436e28fabcd5bcd
	I0603 05:47:16.626861   10844 command_runner.go:130] >   System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Boot ID:                    962d0492-2144-4980-9fec-a02c1a24fa1a
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.626861   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.626861   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.626927   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.626927   10844 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 05:47:16.626927   10844 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 05:47:16.626988   10844 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 05:47:16.626988   10844 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.626988   10844 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.626988   10844 command_runner.go:130] >   default                     busybox-fc5497c4f-hmxqp    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20m
	I0603 05:47:16.627053   10844 command_runner.go:130] >   kube-system                 kindnet-789v5              100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      20m
	I0603 05:47:16.627053   10844 command_runner.go:130] >   kube-system                 kube-proxy-z26hc           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20m
	I0603 05:47:16.627053   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.627053   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.627053   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:16.627114   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:16.627114   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:16.627114   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:16.627114   10844 command_runner.go:130] > Events:
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 05:47:16.627201   10844 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0603 05:47:16.627201   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.627260   10844 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	I0603 05:47:16.627322   10844 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-316400-m02 status is now: NodeNotReady
	I0603 05:47:16.627322   10844 command_runner.go:130] > Name:               multinode-316400-m03
	I0603 05:47:16.627376   10844 command_runner.go:130] > Roles:              <none>
	I0603 05:47:16.627376   10844 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/hostname=multinode-316400-m03
	I0603 05:47:16.627376   10844 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/name=multinode-316400
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	I0603 05:47:16.627422   10844 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 05:47:16.627490   10844 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 05:47:16.627490   10844 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 05:47:16.627490   10844 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 05:47:16.627490   10844 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	I0603 05:47:16.627490   10844 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 05:47:16.627655   10844 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 05:47:16.627655   10844 command_runner.go:130] > Unschedulable:      false
	I0603 05:47:16.627655   10844 command_runner.go:130] > Lease:
	I0603 05:47:16.627655   10844 command_runner.go:130] >   HolderIdentity:  multinode-316400-m03
	I0603 05:47:16.627655   10844 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 05:47:16.627655   10844 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	I0603 05:47:16.627716   10844 command_runner.go:130] > Conditions:
	I0603 05:47:16.627716   10844 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 05:47:16.627716   10844 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 05:47:16.627716   10844 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627780   10844 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 05:47:16.627839   10844 command_runner.go:130] > Addresses:
	I0603 05:47:16.627839   10844 command_runner.go:130] >   InternalIP:  172.17.87.60
	I0603 05:47:16.627839   10844 command_runner.go:130] >   Hostname:    multinode-316400-m03
	I0603 05:47:16.627839   10844 command_runner.go:130] > Capacity:
	I0603 05:47:16.627839   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.627839   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.627839   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.627904   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.627904   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.627904   10844 command_runner.go:130] > Allocatable:
	I0603 05:47:16.627904   10844 command_runner.go:130] >   cpu:                2
	I0603 05:47:16.627972   10844 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 05:47:16.628091   10844 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 05:47:16.628091   10844 command_runner.go:130] >   memory:             2164264Ki
	I0603 05:47:16.628091   10844 command_runner.go:130] >   pods:               110
	I0603 05:47:16.628091   10844 command_runner.go:130] > System Info:
	I0603 05:47:16.628091   10844 command_runner.go:130] >   Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	I0603 05:47:16.628170   10844 command_runner.go:130] >   System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 05:47:16.628170   10844 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 05:47:16.628170   10844 command_runner.go:130] >   Operating System:           linux
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Architecture:               amd64
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 05:47:16.628238   10844 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 05:47:16.628238   10844 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 05:47:16.628299   10844 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 05:47:16.628299   10844 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 05:47:16.628299   10844 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 05:47:16.628299   10844 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 05:47:16.628299   10844 command_runner.go:130] >   kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 05:47:16.628367   10844 command_runner.go:130] >   kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 05:47:16.628367   10844 command_runner.go:130] > Allocated resources:
	I0603 05:47:16.628367   10844 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 05:47:16.628367   10844 command_runner.go:130] >   Resource           Requests   Limits
	I0603 05:47:16.628367   10844 command_runner.go:130] >   --------           --------   ------
	I0603 05:47:16.628444   10844 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 05:47:16.628444   10844 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 05:47:16.628444   10844 command_runner.go:130] > Events:
	I0603 05:47:16.628444   10844 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 05:47:16.628505   10844 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 05:47:16.628505   10844 command_runner.go:130] >   Normal  Starting                 5m45s                  kube-proxy       
	I0603 05:47:16.628505   10844 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 05:47:16.628587   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.628587   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:16.628649   10844 command_runner.go:130] >   Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	I0603 05:47:16.628707   10844 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  NodeReady                5m40s                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	I0603 05:47:16.628766   10844 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	I0603 05:47:16.628824   10844 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	I0603 05:47:16.639563   10844 logs.go:123] Gathering logs for kube-apiserver [a9b10f4d479a] ...
	I0603 05:47:16.639563   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9b10f4d479a"
	I0603 05:47:16.670721   10844 command_runner.go:130] ! I0603 12:45:57.403757       1 options.go:221] external host was not specified, using 172.17.95.88
	I0603 05:47:16.670721   10844 command_runner.go:130] ! I0603 12:45:57.406924       1 server.go:148] Version: v1.30.1
	I0603 05:47:16.671154   10844 command_runner.go:130] ! I0603 12:45:57.407254       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.671208   10844 command_runner.go:130] ! I0603 12:45:58.053920       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 05:47:16.671452   10844 command_runner.go:130] ! I0603 12:45:58.058845       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 05:47:16.671524   10844 command_runner.go:130] ! I0603 12:45:58.058955       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 05:47:16.671524   10844 command_runner.go:130] ! I0603 12:45:58.059338       1 instance.go:299] Using reconciler: lease
	I0603 05:47:16.671567   10844 command_runner.go:130] ! I0603 12:45:58.060201       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:16.671590   10844 command_runner.go:130] ! I0603 12:45:58.875148       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 05:47:16.671590   10844 command_runner.go:130] ! W0603 12:45:58.875563       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.142148       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.142832       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671639   10844 command_runner.go:130] ! I0603 12:45:59.377455       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.573170       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.586634       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 05:47:16.671707   10844 command_runner.go:130] ! W0603 12:45:59.586771       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! W0603 12:45:59.586784       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.671707   10844 command_runner.go:130] ! I0603 12:45:59.588425       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! W0603 12:45:59.588531       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671771   10844 command_runner.go:130] ! I0603 12:45:59.590497       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! I0603 12:45:59.591820       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 05:47:16.671771   10844 command_runner.go:130] ! W0603 12:45:59.591914       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.591924       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.594253       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.594382       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.595963       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.596105       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.596117       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.671827   10844 command_runner.go:130] ! I0603 12:45:59.597347       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 05:47:16.671827   10844 command_runner.go:130] ! W0603 12:45:59.597459       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671972   10844 command_runner.go:130] ! W0603 12:45:59.597610       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.671972   10844 command_runner.go:130] ! I0603 12:45:59.598635       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 05:47:16.671972   10844 command_runner.go:130] ! I0603 12:45:59.601013       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601125       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601136       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! I0603 12:45:59.601685       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601835       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672032   10844 command_runner.go:130] ! W0603 12:45:59.601851       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672086   10844 command_runner.go:130] ! I0603 12:45:59.602906       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 05:47:16.672086   10844 command_runner.go:130] ! W0603 12:45:59.603027       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 05:47:16.672086   10844 command_runner.go:130] ! I0603 12:45:59.605451       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 05:47:16.672134   10844 command_runner.go:130] ! W0603 12:45:59.605590       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672134   10844 command_runner.go:130] ! W0603 12:45:59.605603       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672291   10844 command_runner.go:130] ! I0603 12:45:59.606823       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.607057       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.607073       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672353   10844 command_runner.go:130] ! I0603 12:45:59.610997       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 05:47:16.672353   10844 command_runner.go:130] ! W0603 12:45:59.611141       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672411   10844 command_runner.go:130] ! W0603 12:45:59.611153       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672411   10844 command_runner.go:130] ! I0603 12:45:59.615262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 05:47:16.672411   10844 command_runner.go:130] ! I0603 12:45:59.618444       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 05:47:16.672411   10844 command_runner.go:130] ! W0603 12:45:59.618592       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 05:47:16.672484   10844 command_runner.go:130] ! W0603 12:45:59.618802       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672484   10844 command_runner.go:130] ! I0603 12:45:59.633959       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 05:47:16.672484   10844 command_runner.go:130] ! W0603 12:45:59.634179       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.634387       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! I0603 12:45:59.641016       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.641203       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! W0603 12:45:59.641390       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 05:47:16.672579   10844 command_runner.go:130] ! I0603 12:45:59.643262       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 05:47:16.672643   10844 command_runner.go:130] ! W0603 12:45:59.643611       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:45:59.665282       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 05:47:16.672643   10844 command_runner.go:130] ! W0603 12:45:59.665339       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:46:00.321072       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 05:47:16.672643   10844 command_runner.go:130] ! I0603 12:46:00.321338       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.321510       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.321684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.672726   10844 command_runner.go:130] ! I0603 12:46:00.322441       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.324839       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.324963       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.325383       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331772       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331819       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 05:47:16.672791   10844 command_runner.go:130] ! I0603 12:46:00.331950       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.331975       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.331996       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 05:47:16.672874   10844 command_runner.go:130] ! I0603 12:46:00.332381       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 05:47:16.673051   10844 command_runner.go:130] ! I0603 12:46:00.332390       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332464       1 controller.go:139] Starting OpenAPI controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332488       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332501       1 naming_controller.go:291] Starting NamingConditionController
	I0603 05:47:16.673112   10844 command_runner.go:130] ! I0603 12:46:00.332512       1 establishing_controller.go:76] Starting EstablishingController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332528       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.332550       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.321340       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.673180   10844 command_runner.go:130] ! I0603 12:46:00.325911       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.348350       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.348672       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.325922       1 available_controller.go:423] Starting AvailableConditionController
	I0603 05:47:16.673249   10844 command_runner.go:130] ! I0603 12:46:00.350192       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.325939       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.325949       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.368845       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 05:47:16.673317   10844 command_runner.go:130] ! I0603 12:46:00.368878       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.451943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 05:47:16.673410   10844 command_runner.go:130] ! I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 05:47:16.673531   10844 command_runner.go:130] ! I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 05:47:16.673591   10844 command_runner.go:130] ! I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 05:47:16.673665   10844 command_runner.go:130] ! I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 05:47:16.673733   10844 command_runner.go:130] ! W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 05:47:16.673733   10844 command_runner.go:130] ! I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 05:47:16.673800   10844 command_runner.go:130] ! I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 05:47:16.673870   10844 command_runner.go:130] ! I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 05:47:16.673870   10844 command_runner.go:130] ! I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 05:47:16.673870   10844 command_runner.go:130] ! W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
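The apiserver log above reads as a clean cold start: API groups registered (with the expected skips for beta/alpha groups that ship no resources in v1.30), secure serving on [::]:8443, caches synced, and the "kubernetes" master-service endpoints reset from [172.17.87.47 172.17.95.88] down to [172.17.95.88], which is what a control-plane restart under a single surviving address looks like. Two quick spot-checks, sketched under the same context-name assumption as above:

    kubectl --context multinode-316400 get --raw /readyz?verbose
    kubectl --context multinode-316400 -n default get endpoints kubernetes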
	I0603 05:47:16.683351   10844 logs.go:123] Gathering logs for kube-scheduler [f39be6db7a1f] ...
	I0603 05:47:16.683351   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f39be6db7a1f"
	I0603 05:47:16.717960   10844 command_runner.go:130] ! I0603 12:22:59.604855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:16.717960   10844 command_runner.go:130] ! W0603 12:23:00.885974       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886217       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.886344       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.957357       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.957471       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962196       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962588       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:16.718083   10844 command_runner.go:130] ! I0603 12:23:00.962719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.975786       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.976030       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.976627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.976720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.977093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.977211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.977871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! E0603 12:23:00.978108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.718083   10844 command_runner.go:130] ! W0603 12:23:00.978352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! E0603 12:23:00.978554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! W0603 12:23:00.978915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.718675   10844 command_runner.go:130] ! E0603 12:23:00.979166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! W0603 12:23:00.979907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! E0603 12:23:00.980156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.718812   10844 command_runner.go:130] ! W0603 12:23:00.980358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.718960   10844 command_runner.go:130] ! E0603 12:23:00.980393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719009   10844 command_runner.go:130] ! W0603 12:23:00.980479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.980561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.980991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.981883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.981956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.982090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:00.982102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:00.982927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.795531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.795655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.838399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.838478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.861969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.862351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! W0603 12:23:01.873392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719058   10844 command_runner.go:130] ! E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 05:47:16.719612   10844 command_runner.go:130] ! W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719686   10844 command_runner.go:130] ! E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719686   10844 command_runner.go:130] ! W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.719763   10844 command_runner.go:130] ! E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 05:47:16.719763   10844 command_runner.go:130] ! W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719842   10844 command_runner.go:130] ! E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719842   10844 command_runner.go:130] ! W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 05:47:16.719902   10844 command_runner.go:130] ! W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.720057   10844 command_runner.go:130] ! E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 05:47:16.720107   10844 command_runner.go:130] ! W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.720180   10844 command_runner.go:130] ! E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:16.720180   10844 command_runner.go:130] ! W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.720244   10844 command_runner.go:130] ! E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 05:47:16.720244   10844 command_runner.go:130] ! W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.720315   10844 command_runner.go:130] ! E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 05:47:16.720315   10844 command_runner.go:130] ! I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:16.720315   10844 command_runner.go:130] ! E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
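The scheduler log splits into two phases: the startup burst of RBAC denials ("User \"system:kube-scheduler\" cannot list resource ... at the cluster scope"), which is routine noise until the apiserver finishes syncing its authorization caches at 12:23:04, and the terminal error at 12:43:27, "finished without leader elect", which is the scheduler exiting after losing its leader-election lease when the control plane went down. The lease itself is an ordinary coordination.k8s.io object and can be inspected directly (a sketch; the Lease is named kube-scheduler in kube-system):

    kubectl --context multinode-316400 -n kube-system get lease kube-scheduler -o yaml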
	I0603 05:47:16.731704   10844 logs.go:123] Gathering logs for kube-proxy [09616a16042d] ...
	I0603 05:47:16.731704   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09616a16042d"
	I0603 05:47:16.773806   10844 command_runner.go:130] ! I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:16.774624   10844 command_runner.go:130] ! I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 05:47:16.774624   10844 command_runner.go:130] ! I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:16.774680   10844 command_runner.go:130] ! I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:16.774763   10844 command_runner.go:130] ! I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:16.774763   10844 command_runner.go:130] ! I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.774825   10844 command_runner.go:130] ! I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 05:47:16.774910   10844 command_runner.go:130] ! I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 05:47:16.774954   10844 command_runner.go:130] ! I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:16.775028   10844 command_runner.go:130] ! I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
	I0603 05:47:16.778045   10844 logs.go:123] Gathering logs for kube-proxy [ad08c7b8f3af] ...
	I0603 05:47:16.778045   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad08c7b8f3af"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 05:47:16.815377   10844 command_runner.go:130] ! I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 05:47:16.816511   10844 command_runner.go:130] ! I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 05:47:16.816579   10844 command_runner.go:130] ! I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
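Both kube-proxy instances, the post-restart one (09616a16042d, node IP 172.17.95.88) and the pre-restart one (ad08c7b8f3af, node IP 172.17.87.47), started in iptables mode, single-stack IPv4, and synced their service, endpoint-slice, and node config caches, so proxying looks healthy on both sides of the restart. To confirm the programmed rules on the VM, a sketch using the same binary the harness drives (KUBE-SERVICES is kube-proxy's top-level iptables chain):

    out/minikube-windows-amd64.exe -p multinode-316400 ssh "sudo iptables-save | grep KUBE-SERVICES"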
	I0603 05:47:16.822584   10844 logs.go:123] Gathering logs for kubelet ...
	I0603 05:47:16.822638   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825136    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.825207    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: I0603 12:45:50.826137    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 kubelet[1385]: E0603 12:45:50.827240    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.854219   10844 command_runner.go:130] > Jun 03 12:45:50 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552269    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552416    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: I0603 12:45:51.552941    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 kubelet[1442]: E0603 12:45:51.553003    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:51 multinode-316400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711442    1519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711544    1519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.711817    1519 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.716147    1519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.748912    1519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.771826    1519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.772049    1519 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 05:47:16.854927   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773407    1519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 05:47:16.855591   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.773557    1519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-316400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774457    1519 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.774557    1519 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.775200    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778084    1519 kubelet.go:400] "Attempting to sync node with API server"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778299    1519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.778455    1519 kubelet.go:312] "Adding apiserver pod source"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.782054    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.782432    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.785611    1519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.790640    1519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.793090    1519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.794605    1519 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.796156    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.796271    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.797002    1519 server.go:1264] "Started kubelet"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.798266    1519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.801861    1519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.802334    1519 server.go:455] "Adding debug handlers to kubelet server"
	I0603 05:47:16.855652   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.803283    1519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.803500    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.818343    1519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.844408    1519 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.846586    1519 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859495    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="200ms"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.859675    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.859801    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860191    1519 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860329    1519 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.860344    1519 factory.go:221] Registration of the systemd container factory successfully
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898244    1519 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898480    1519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.898596    1519 state_mem.go:36] "Initialized new in-memory state store"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899321    1519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899417    1519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.899447    1519 policy_none.go:49] "None policy: Start"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.900544    1519 reconciler.go:26] "Reconciler: start to sync state"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907485    1519 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.907527    1519 state_mem.go:35] "Initializing new in-memory state store"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.908237    1519 state_mem.go:75] "Updated machine memory state"
	I0603 05:47:16.856402   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.913835    1519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914035    1519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.914854    1519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 05:47:16.857019   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.921784    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.927630    1519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-316400\" not found"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932254    1519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932281    1519 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.932300    1519 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.935092    1519 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: W0603 12:45:54.940949    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.941116    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: I0603 12:45:54.948643    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.949875    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]: E0603 12:45:54.957193    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.035350    1519 topology_manager.go:215] "Topology Admit Handler" podUID="29e4294fa112526de08d5737962f6330" podNamespace="kube-system" podName="kube-apiserver-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.036439    1519 topology_manager.go:215] "Topology Admit Handler" podUID="53c1415900cfae2b2544e26360f8c9e2" podNamespace="kube-system" podName="kube-controller-manager-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.037279    1519 topology_manager.go:215] "Topology Admit Handler" podUID="392dbbcc275890dd2b6fadbfc5aaee27" podNamespace="kube-system" podName="kube-scheduler-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.040156    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a77247d80dfdd462b8863b85ab8ad4bb" podNamespace="kube-system" podName="etcd-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041355    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf22fe66615444841b76ea00858c2d191b3808baedd9bc080bc40a07e173120c"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041413    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b8b906c7ece4b6d777a07a0cb2203eff03efdfae414479586ee928dfd93a0f"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041426    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab8fbb688dfe331c1f384bb60f2e3169f09a613ebbfb33a15f502f1d3e605b1"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.041486    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f0d5d979f878809d344310dbe1eff0bad9db5a6522da02c87fecce5e5aeee0"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.047918    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b4a69fc5b72d73e1786ba4b220631a73bd21f4e58f7cb9408fbf75f3f6ae6e"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.063032    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="400ms"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.063221    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24225992b633386b5c5d178b106212b6c942a19a6f436ce076aaa359c121477"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.079235    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87702037798e93cc1060d5befe77a7f660d0ce5c836be9ca173cc4d1789327d4"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.093321    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956a24c17e7023829e09aba40a222a457a14deb99874053b42496e160b5dc9d"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.105962    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857133   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106038    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-certs\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106081    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-ca-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106112    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-ca-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106140    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-k8s-certs\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106216    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/392dbbcc275890dd2b6fadbfc5aaee27-kubeconfig\") pod \"kube-scheduler-multinode-316400\" (UID: \"392dbbcc275890dd2b6fadbfc5aaee27\") " pod="kube-system/kube-scheduler-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106252    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a77247d80dfdd462b8863b85ab8ad4bb-etcd-data\") pod \"etcd-multinode-316400\" (UID: \"a77247d80dfdd462b8863b85ab8ad4bb\") " pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106274    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-k8s-certs\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106301    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29e4294fa112526de08d5737962f6330-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-316400\" (UID: \"29e4294fa112526de08d5737962f6330\") " pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106335    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-flexvolume-dir\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.106354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c1415900cfae2b2544e26360f8c9e2-kubeconfig\") pod \"kube-controller-manager-multinode-316400\" (UID: \"53c1415900cfae2b2544e26360f8c9e2\") " pod="kube-system/kube-controller-manager-multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.108700    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f366fa802e02ad1c75f843781b4cf6b39c2e71e08ec4fb65114ebe9cbf4901"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.152637    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.154286    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.473402    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="800ms"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: I0603 12:45:55.556260    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.558340    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.691400    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.691528    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-316400&limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: W0603 12:45:55.943127    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:55 multinode-316400 kubelet[1519]: E0603 12:45:55.943173    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.857996   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.142169    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2e6f87def8ec65b487278aa755fad937c4ca80395b1353b9774ec940401ea"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.150065    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942fe3bc13ce6ffca043bea71cd86e77d36f0312701537c71338d38cba386b47"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.247409    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.247587    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: W0603 12:45:56.250356    1519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.250413    1519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.95.88:8443: connect: connection refused
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.274392    1519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-316400?timeout=10s\": dial tcp 172.17.95.88:8443: connect: connection refused" interval="1.6s"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: I0603 12:45:56.360120    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.361915    1519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.95.88:8443: connect: connection refused" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:56 multinode-316400 kubelet[1519]: E0603 12:45:56.861220    1519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.95.88:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-316400.17d57f421a4486bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-316400,UID:multinode-316400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-316400,},FirstTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,LastTimestamp:2024-06-03 12:45:54.796979901 +0000 UTC m=+0.190595347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-316400,}"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:45:57 multinode-316400 kubelet[1519]: I0603 12:45:57.964214    1519 kubelet_node_status.go:73] "Attempting to register node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604617    1519 kubelet_node_status.go:112] "Node was previously registered" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.604775    1519 kubelet_node_status.go:76] "Successfully registered node" node="multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.606910    1519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.607771    1519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.608805    1519 setters.go:580] "Node became not ready" node="multinode-316400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T12:46:00Z","lastTransitionTime":"2024-06-03T12:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.691329    1519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-316400\" already exists" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.791033    1519 apiserver.go:52] "Watching apiserver"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798319    1519 topology_manager.go:215] "Topology Admit Handler" podUID="a3523f27-9775-4c1f-812f-a667faa1bace" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4hrc6"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.798930    1519 topology_manager.go:215] "Topology Admit Handler" podUID="6815ff24-537b-42f3-b8ee-4c3e13be89f7" podNamespace="kube-system" podName="kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800209    1519 topology_manager.go:215] "Topology Admit Handler" podUID="60c8f253-7e07-4f56-b1f2-e0032ac6a8ce" podNamespace="kube-system" podName="kube-proxy-ks64x"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800471    1519 topology_manager.go:215] "Topology Admit Handler" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa" podNamespace="kube-system" podName="storage-provisioner"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.800813    1519 topology_manager.go:215] "Topology Admit Handler" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39" podNamespace="default" podName="busybox-fc5497c4f-pm79t"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.801153    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.801692    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-316400" podUID="5a3b396d-1240-4c67-b2f5-e5664e068bfe"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.802378    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.833818    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-316400"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.848055    1519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.920366    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-cni-cfg\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923685    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-lib-modules\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.923879    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-xtables-lock\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924084    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6815ff24-537b-42f3-b8ee-4c3e13be89f7-xtables-lock\") pod \"kindnet-4hpsl\" (UID: \"6815ff24-537b-42f3-b8ee-4c3e13be89f7\") " pod="kube-system/kindnet-4hpsl"
	I0603 05:47:16.858998   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924331    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbd73e44-9a7e-4b5f-93e5-d1621c837baa-tmp\") pod \"storage-provisioner\" (UID: \"bbd73e44-9a7e-4b5f-93e5-d1621c837baa\") " pod="kube-system/storage-provisioner"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.924536    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c8f253-7e07-4f56-b1f2-e0032ac6a8ce-lib-modules\") pod \"kube-proxy-ks64x\" (UID: \"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce\") " pod="kube-system/kube-proxy-ks64x"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.924884    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.925133    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.425053064 +0000 UTC m=+6.818668510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.947864    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171c5f025e4267e9949ddac2f1863980" path="/var/lib/kubelet/pods/171c5f025e4267e9949ddac2f1863980/volumes"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.949521    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79ce6c8ebbce53597babbe73b1962c9" path="/var/lib/kubelet/pods/b79ce6c8ebbce53597babbe73b1962c9/volumes"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.959965    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960012    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: E0603 12:46:00.960141    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:01.460099085 +0000 UTC m=+6.853714631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:00 multinode-316400 kubelet[1519]: I0603 12:46:00.984966    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-316400" podStartSLOduration=0.984946212 podStartE2EDuration="984.946212ms" podCreationTimestamp="2024-06-03 12:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:00.911653941 +0000 UTC m=+6.305269487" watchObservedRunningTime="2024-06-03 12:46:00.984946212 +0000 UTC m=+6.378561658"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430112    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.430199    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.430180493 +0000 UTC m=+7.823795939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532174    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532233    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: E0603 12:46:01.532300    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:02.532282929 +0000 UTC m=+7.925898375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:01 multinode-316400 kubelet[1519]: I0603 12:46:01.863329    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fb3e0c2be17fd0baa825713d9ad8be17752ebb27c0c4aa1e0166aa5b3b5c4"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.165874    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb9a5291cc42a783090e13d8314748390c99ef26ac5c263b5f565211b239b7b"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.352473    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8f89dffdc8ec0b02151634c14e24a5ac0395117546f38ea23be29d32e92b91"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.353470    1519 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-316400" podUID="0cdcee20-9dca-4eca-b92f-a7214368dd5e"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: I0603 12:46:02.376913    1519 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-316400"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442116    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.442214    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.442196268 +0000 UTC m=+9.835811814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543119    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543210    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.543279    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:04.543260694 +0000 UTC m=+9.936876140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935003    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.859995   10844 command_runner.go:130] > Jun 03 12:46:02 multinode-316400 kubelet[1519]: E0603 12:46:02.935334    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:03 multinode-316400 kubelet[1519]: I0603 12:46:03.466467    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-316400" podStartSLOduration=1.4664454550000001 podStartE2EDuration="1.466445455s" podCreationTimestamp="2024-06-03 12:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 12:46:03.412988665 +0000 UTC m=+8.806604211" watchObservedRunningTime="2024-06-03 12:46:03.466445455 +0000 UTC m=+8.860061001"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461035    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.461144    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.461126571 +0000 UTC m=+13.854742017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562140    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562216    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.562368    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:08.562318298 +0000 UTC m=+13.955933744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.917749    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935276    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:04 multinode-316400 kubelet[1519]: E0603 12:46:04.935939    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:06 multinode-316400 kubelet[1519]: E0603 12:46:06.935856    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497589    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.497705    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.497687292 +0000 UTC m=+21.891302738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599269    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599402    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.599472    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:16.599454365 +0000 UTC m=+21.993069911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933000    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:08 multinode-316400 kubelet[1519]: E0603 12:46:08.933553    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:09 multinode-316400 kubelet[1519]: E0603 12:46:09.919522    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.933394    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:10 multinode-316400 kubelet[1519]: E0603 12:46:10.934072    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.933530    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:12 multinode-316400 kubelet[1519]: E0603 12:46:12.934829    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.920634    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861011   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.933278    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:14 multinode-316400 kubelet[1519]: E0603 12:46:14.934086    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.577469    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.578411    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.578339881 +0000 UTC m=+37.971955427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.677992    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678127    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.678205    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:46:32.678184952 +0000 UTC m=+38.071800498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933065    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:16 multinode-316400 kubelet[1519]: E0603 12:46:16.933791    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.934362    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:18 multinode-316400 kubelet[1519]: E0603 12:46:18.935128    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:19 multinode-316400 kubelet[1519]: E0603 12:46:19.922666    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.934372    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:20 multinode-316400 kubelet[1519]: E0603 12:46:20.935099    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934047    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:22 multinode-316400 kubelet[1519]: E0603 12:46:22.934767    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.924197    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.933388    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:24 multinode-316400 kubelet[1519]: E0603 12:46:24.934120    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.934350    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:26 multinode-316400 kubelet[1519]: E0603 12:46:26.935369    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.934504    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:28 multinode-316400 kubelet[1519]: E0603 12:46:28.935634    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:29 multinode-316400 kubelet[1519]: E0603 12:46:29.925755    1519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 05:47:16.861995   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.933950    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:30 multinode-316400 kubelet[1519]: E0603 12:46:30.937812    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624555    1519 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.624639    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume podName:a3523f27-9775-4c1f-812f-a667faa1bace nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.624619316 +0000 UTC m=+70.018234762 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3523f27-9775-4c1f-812f-a667faa1bace-config-volume") pod "coredns-7db6d8ff4d-4hrc6" (UID: "a3523f27-9775-4c1f-812f-a667faa1bace") : object "kube-system"/"coredns" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726444    1519 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726516    1519 projected.go:200] Error preparing data for projected volume kube-api-access-l2hdj for pod default/busybox-fc5497c4f-pm79t: object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.726576    1519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj podName:5a541beb-e22e-41aa-bb76-5e6e82ac0d39 nodeName:}" failed. No retries permitted until 2024-06-03 12:47:04.726559662 +0000 UTC m=+70.120175108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2hdj" (UniqueName: "kubernetes.io/projected/5a541beb-e22e-41aa-bb76-5e6e82ac0d39-kube-api-access-l2hdj") pod "busybox-fc5497c4f-pm79t" (UID: "5a541beb-e22e-41aa-bb76-5e6e82ac0d39") : object "default"/"kube-root-ca.crt" not registered
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.933519    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4hrc6" podUID="a3523f27-9775-4c1f-812f-a667faa1bace"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:32 multinode-316400 kubelet[1519]: E0603 12:46:32.934365    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-pm79t" podUID="5a541beb-e22e-41aa-bb76-5e6e82ac0d39"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.841289    1519 scope.go:117] "RemoveContainer" containerID="f3d3a474bbe63a5e0e163d5c7d92c13e3e09cac96cc090c7077e648e1f08c5c7"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: I0603 12:46:33.842261    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:33 multinode-316400 kubelet[1519]: E0603 12:46:33.842518    1519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbd73e44-9a7e-4b5f-93e5-d1621c837baa)\"" pod="kube-system/storage-provisioner" podUID="bbd73e44-9a7e-4b5f-93e5-d1621c837baa"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	I0603 05:47:16.863003   10844 command_runner.go:130] > Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
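Aside: the mount retries above back off geometrically (durationBeforeRetry 8s, then 16s, then 32s for the same volume). A minimal Go sketch of that doubling pattern, assuming a 2x multiplier and an illustrative cap (the kubelet's actual parameters live in nestedpendingoperations.go and may differ):

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous retry delay up to a limit, mirroring the
// 8s -> 16s -> 32s progression in the kubelet log above. The 2-minute limit
// here is an assumption for illustration, not the kubelet's exact value.
func nextBackoff(prev, limit time.Duration) time.Duration {
	next := prev * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	d := 8 * time.Second
	for i := 1; i <= 5; i++ {
		fmt.Printf("retry %d: durationBeforeRetry %s\n", i, d)
		d = nextBackoff(d, 2*time.Minute)
	}
}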
	I0603 05:47:16.909090   10844 logs.go:123] Gathering logs for etcd [ef3c01484867] ...
	I0603 05:47:16.909090   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3c01484867"
	I0603 05:47:16.947821   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.861568Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.863054Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.95.88:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.95.88:2380","--initial-cluster=multinode-316400=https://172.17.95.88:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.95.88:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.95.88:2380","--name=multinode-316400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.86357Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:56.864546Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.866457Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.95.88:2380"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.867148Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.884169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.885995Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-316400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.912835Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"25.475134ms"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.947133Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.990656Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","commit-index":1995}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=()"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became follower at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:56.991421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2227694153984668 [peers: [], term: 2, commit: 1995, applied: 0, lastindex: 1995, lastterm: 2]"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T12:45:57.005826Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.01104Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1364}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.018364Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1726}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.030883Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042399Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2227694153984668","timeout":"7s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.042946Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2227694153984668"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.043072Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2227694153984668","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.046821Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047797Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	I0603 05:47:16.948844   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	I0603 05:47:16.949922   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 05:47:16.950025   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 05:47:16.950126   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 05:47:16.950126   10844 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
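Aside: the etcd entries above are zap-structured JSON, which makes them easy to filter mechanically. A minimal sketch that strips the runner prefix and decodes the level/ts/msg fields (the struct and scanner here are illustrative helpers, not part of minikube; field names are taken from the output itself):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// etcdLine models the zap JSON fields visible in the etcd output above.
type etcdLine struct {
	Level  string `json:"level"`
	TS     string `json:"ts"`
	Caller string `json:"caller"`
	Msg    string `json:"msg"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		raw := sc.Text()
		// Lines carry a command_runner prefix; keep only the JSON object.
		if i := strings.Index(raw, "{"); i >= 0 {
			raw = raw[i:]
		}
		var l etcdLine
		if err := json.Unmarshal([]byte(raw), &l); err != nil {
			continue // skip non-JSON lines
		}
		fmt.Printf("%-5s %s %s\n", l.Level, l.TS, l.Msg)
	}
}

Piping the etcd block above through this filter reduces it to one level/timestamp/message triple per line, which makes the term-2-to-term-3 election sequence easier to follow.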
	I0603 05:47:16.956883   10844 logs.go:123] Gathering logs for coredns [4241e2ff2dfe] ...
	I0603 05:47:16.956883   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4241e2ff2dfe"
	I0603 05:47:16.988242   10844 command_runner.go:130] > .:53
	I0603 05:47:16.989206   10844 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	I0603 05:47:16.989252   10844 command_runner.go:130] > CoreDNS-1.11.1
	I0603 05:47:16.989252   10844 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 05:47:16.989252   10844 command_runner.go:130] > [INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
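Aside: the lone HINFO query above is CoreDNS's loop-plugin probe; a random name answered with NXDOMAIN indicates no forwarding loop. A minimal sketch of issuing a similar probe with the github.com/miekg/dns package (the probe name and resolver address are assumptions for illustration):

package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	// A random-looking name, like the 206560838863428655.xxx probe above;
	// NXDOMAIN on the reply means no forwarding loop was detected.
	m.SetQuestion(dns.Fqdn("206560838863428655.example."), dns.TypeHINFO)

	c := new(dns.Client)
	// 127.0.0.1:53 is an assumed resolver address for illustration.
	r, _, err := c.Exchange(m, "127.0.0.1:53")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rcode:", dns.RcodeToString[r.Rcode])
}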
	I0603 05:47:16.989538   10844 logs.go:123] Gathering logs for container status ...
	I0603 05:47:16.989647   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 05:47:17.056163   10844 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 05:47:17.056213   10844 command_runner.go:130] > c57e529e14789       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	I0603 05:47:17.056278   10844 command_runner.go:130] > 4241e2ff2dfe8       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:17.056278   10844 command_runner.go:130] > e1365acc9d8f5       6e38f40d628db                                                                                         33 seconds ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:17.056325   10844 command_runner.go:130] > 3a08a76e2a79b       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	I0603 05:47:17.056325   10844 command_runner.go:130] > eeba3616d7005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	I0603 05:47:17.056325   10844 command_runner.go:130] > 09616a16042d3       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	I0603 05:47:17.056391   10844 command_runner.go:130] > a9b10f4d479ac       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	I0603 05:47:17.056431   10844 command_runner.go:130] > ef3c014848675       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	I0603 05:47:17.056475   10844 command_runner.go:130] > 334bb0174b55e       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	I0603 05:47:17.056517   10844 command_runner.go:130] > cbaa09a85a643       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	I0603 05:47:17.056588   10844 command_runner.go:130] > ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	I0603 05:47:17.056588   10844 command_runner.go:130] > 8280b39046781       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	I0603 05:47:17.056627   10844 command_runner.go:130] > a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	I0603 05:47:17.056627   10844 command_runner.go:130] > ad08c7b8f3aff       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	I0603 05:47:17.056627   10844 command_runner.go:130] > f39be6db7a1f8       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	I0603 05:47:17.056627   10844 command_runner.go:130] > 3d7dc29a57912       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
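Aside: the container listing above comes from a shell fallback (crictl ps -a, else docker ps -a). A minimal Go sketch of the same try-then-fall-back pattern (sudo and PATH handling simplified; illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// listContainers tries crictl first and falls back to docker, mirroring the
// `crictl ps -a || docker ps -a` fallback in the gathered command above.
func listContainers() (string, error) {
	for _, tool := range []string{"crictl", "docker"} {
		out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a listing")
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}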
	I0603 05:47:17.058787   10844 logs.go:123] Gathering logs for kube-scheduler [334bb0174b55] ...
	I0603 05:47:17.059377   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 334bb0174b55"
	I0603 05:47:17.088120   10844 command_runner.go:130] ! I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:17.088578   10844 command_runner.go:130] ! W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 05:47:17.088620   10844 command_runner.go:130] ! W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 05:47:17.088666   10844 command_runner.go:130] ! W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 05:47:17.088731   10844 command_runner.go:130] ! W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:17.088731   10844 command_runner.go:130] ! I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
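Aside: the requestheader_controller warning above is an RBAC failure reading a single well-known ConfigMap. A minimal client-go sketch of the same read, handy for checking whether a given identity can see it (kubeconfig loading and error handling simplified; illustrative only):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at the default path; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The same read the scheduler's requestheader controller performs.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
		context.Background(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // a Forbidden error here matches the warning in the log
	}
	fmt.Println("found configmap with", len(cm.Data), "keys")
}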
	I0603 05:47:17.091051   10844 logs.go:123] Gathering logs for kube-controller-manager [cbaa09a85a64] ...
	I0603 05:47:17.091128   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa09a85a64"
	I0603 05:47:17.123816   10844 command_runner.go:130] ! I0603 12:45:57.870752       1 serving.go:380] Generated self-signed cert in-memory
	I0603 05:47:17.124610   10844 command_runner.go:130] ! I0603 12:45:58.526588       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 05:47:17.124610   10844 command_runner.go:130] ! I0603 12:45:58.526702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 05:47:17.124739   10844 command_runner.go:130] ! I0603 12:45:58.533907       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 05:47:17.124879   10844 command_runner.go:130] ! I0603 12:45:58.534542       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 05:47:17.125087   10844 command_runner.go:130] ! I0603 12:45:58.535842       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 05:47:17.125702   10844 command_runner.go:130] ! I0603 12:45:58.536233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 05:47:17.126041   10844 command_runner.go:130] ! I0603 12:46:02.398949       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 05:47:17.126114   10844 command_runner.go:130] ! I0603 12:46:02.399900       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 05:47:17.126215   10844 command_runner.go:130] ! I0603 12:46:02.435010       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 05:47:17.126282   10844 command_runner.go:130] ! I0603 12:46:02.435043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:17.126537   10844 command_runner.go:130] ! I0603 12:46:02.435076       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 05:47:17.126615   10844 command_runner.go:130] ! I0603 12:46:02.435752       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 05:47:17.126828   10844 command_runner.go:130] ! I0603 12:46:02.494257       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 05:47:17.126942   10844 command_runner.go:130] ! I0603 12:46:02.494484       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 05:47:17.126942   10844 command_runner.go:130] ! I0603 12:46:02.501595       1 shared_informer.go:320] Caches are synced for tokens
	I0603 05:47:17.126997   10844 command_runner.go:130] ! E0603 12:46:02.503053       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 05:47:17.127173   10844 command_runner.go:130] ! I0603 12:46:02.503101       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 05:47:17.127212   10844 command_runner.go:130] ! I0603 12:46:02.506314       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 05:47:17.127447   10844 command_runner.go:130] ! I0603 12:46:02.511488       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.511970       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.516592       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.520190       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.521481       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.521500       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522419       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522531       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.522539       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.527263       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.527284       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.528477       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 05:47:17.127483   10844 command_runner.go:130] ! I0603 12:46:02.528534       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.528980       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.529023       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 05:47:17.128043   10844 command_runner.go:130] ! I0603 12:46:02.529029       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532164       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532658       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.532787       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.537982       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.538156       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.540497       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 05:47:17.128105   10844 command_runner.go:130] ! I0603 12:46:02.545135       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:17.128958   10844 command_runner.go:130] ! I0603 12:46:02.545508       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 05:47:17.129003   10844 command_runner.go:130] ! I0603 12:46:02.546501       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.548466       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.551407       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.551542       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552105       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552249       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552280       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.552956       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.564031       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.564743       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.565277       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.565424       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.571139       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.571233       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.572399       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.572466       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.573181       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.573205       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.574887       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 05:47:17.129032   10844 command_runner.go:130] ! I0603 12:46:02.582200       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.582364       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.582373       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 05:47:17.129591   10844 command_runner.go:130] ! I0603 12:46:02.588602       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:02.591240       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.612297       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.612483       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 05:47:17.129705   10844 command_runner.go:130] ! I0603 12:46:12.613381       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.623612       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.628478       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 05:47:17.129798   10844 command_runner.go:130] ! I0603 12:46:12.628951       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 05:47:17.129845   10844 command_runner.go:130] ! I0603 12:46:12.629235       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 05:47:17.129845   10844 command_runner.go:130] ! I0603 12:46:12.652905       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 05:47:17.129888   10844 command_runner.go:130] ! I0603 12:46:12.652988       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 05:47:17.129888   10844 command_runner.go:130] ! I0603 12:46:12.653246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 05:47:17.129926   10844 command_runner.go:130] ! I0603 12:46:12.673155       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 05:47:17.129945   10844 command_runner.go:130] ! I0603 12:46:12.673199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 05:47:17.129945   10844 command_runner.go:130] ! I0603 12:46:12.673508       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 05:47:17.130051   10844 command_runner.go:130] ! I0603 12:46:12.673789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 05:47:17.130119   10844 command_runner.go:130] ! I0603 12:46:12.674494       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 05:47:17.130119   10844 command_runner.go:130] ! I0603 12:46:12.674611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 05:47:17.130196   10844 command_runner.go:130] ! I0603 12:46:12.674812       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 05:47:17.130239   10844 command_runner.go:130] ! I0603 12:46:12.675099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 05:47:17.130449   10844 command_runner.go:130] ! I0603 12:46:12.675266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 05:47:17.130500   10844 command_runner.go:130] ! I0603 12:46:12.675397       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 05:47:17.131143   10844 command_runner.go:130] ! I0603 12:46:12.675422       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 05:47:17.131448   10844 command_runner.go:130] ! I0603 12:46:12.675675       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 05:47:17.131448   10844 command_runner.go:130] ! I0603 12:46:12.675833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 05:47:17.131930   10844 command_runner.go:130] ! I0603 12:46:12.675905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 05:47:17.132870   10844 command_runner.go:130] ! I0603 12:46:12.676018       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 05:47:17.133365   10844 command_runner.go:130] ! I0603 12:46:12.676230       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 05:47:17.133424   10844 command_runner.go:130] ! I0603 12:46:12.676428       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676474       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676746       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676879       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.676991       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.677057       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 05:47:17.133461   10844 command_runner.go:130] ! I0603 12:46:12.677159       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:17.133765   10844 command_runner.go:130] ! I0603 12:46:12.677261       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 05:47:17.133765   10844 command_runner.go:130] ! I0603 12:46:12.679809       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 05:47:17.133824   10844 command_runner.go:130] ! I0603 12:46:12.680265       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.680400       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.696376       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.697035       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.697121       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.699870       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.700035       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.700365       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.707376       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.708196       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.708250       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.715601       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.716125       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.716429       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.725280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.725365       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.726123       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.734528       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.734935       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.735117       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737491       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737773       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.737858       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743270       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743591       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743640       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.743648       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748185       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748266       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748498       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 05:47:17.133860   10844 command_runner.go:130] ! I0603 12:46:12.748532       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.748553       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749140       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749625       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 05:47:17.134402   10844 command_runner.go:130] ! I0603 12:46:12.749663       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.749683       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.749897       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.750105       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.750568       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.753301       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 05:47:17.134539   10844 command_runner.go:130] ! I0603 12:46:12.753662       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.753804       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.754382       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.754576       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.757083       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.757524       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.758174       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 05:47:17.134687   10844 command_runner.go:130] ! I0603 12:46:12.760247       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.760686       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.760938       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.772698       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.772922       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 05:47:17.134824   10844 command_runner.go:130] ! I0603 12:46:12.774148       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 05:47:17.134824   10844 command_runner.go:130] ! E0603 12:46:12.775996       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 05:47:17.134943   10844 command_runner.go:130] ! I0603 12:46:12.776034       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 05:47:17.134943   10844 command_runner.go:130] ! I0603 12:46:12.779294       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 05:47:17.135005   10844 command_runner.go:130] ! I0603 12:46:12.779452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 05:47:17.135005   10844 command_runner.go:130] ! I0603 12:46:12.780268       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 05:47:17.135066   10844 command_runner.go:130] ! I0603 12:46:12.783043       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 05:47:17.135066   10844 command_runner.go:130] ! I0603 12:46:12.783634       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 05:47:17.135108   10844 command_runner.go:130] ! I0603 12:46:12.783847       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 05:47:17.135166   10844 command_runner.go:130] ! I0603 12:46:12.783962       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 05:47:17.135166   10844 command_runner.go:130] ! I0603 12:46:12.792655       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.801373       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.817303       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.821609       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 05:47:17.135219   10844 command_runner.go:130] ! I0603 12:46:12.829238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.135617   10844 command_runner.go:130] ! I0603 12:46:12.832397       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400\" does not exist"
	I0603 05:47:17.136197   10844 command_runner.go:130] ! I0603 12:46:12.832809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.136197   10844 command_runner.go:130] ! I0603 12:46:12.833093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 05:47:17.136738   10844 command_runner.go:130] ! I0603 12:46:12.833264       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 05:47:17.136876   10844 command_runner.go:130] ! I0603 12:46:12.833561       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 05:47:17.136953   10844 command_runner.go:130] ! I0603 12:46:12.833878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.136953   10844 command_runner.go:130] ! I0603 12:46:12.835226       1 shared_informer.go:320] Caches are synced for service account
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.840542       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.846790       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.849319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.849497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.851129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.851147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.852109       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.854406       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.854923       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.867259       1 shared_informer.go:320] Caches are synced for expand
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.873525       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.874696       1 shared_informer.go:320] Caches are synced for HPA
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.876061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.880612       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.880650       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.884270       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.896673       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.897786       1 shared_informer.go:320] Caches are synced for namespace
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.909588       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.922202       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.923485       1 shared_informer.go:320] Caches are synced for TTL
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.923685       1 shared_informer.go:320] Caches are synced for node
	I0603 05:47:17.137907   10844 command_runner.go:130] ! I0603 12:46:12.924158       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 05:47:17.138539   10844 command_runner.go:130] ! I0603 12:46:12.924516       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.924851       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.924952       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.928113       1 shared_informer.go:320] Caches are synced for GC
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.929667       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.959523       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:12.963250       1 shared_informer.go:320] Caches are synced for deployment
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.029808       1 shared_informer.go:320] Caches are synced for taint
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.030293       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.038277       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.044424       1 shared_informer.go:320] Caches are synced for disruption
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.064984       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.077763       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.083477       1 shared_informer.go:320] Caches are synced for job
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.093778       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 05:47:17.138784   10844 command_runner.go:130] ! I0603 12:46:13.100897       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 05:47:17.139484   10844 command_runner.go:130] ! I0603 12:46:13.133780       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 05:47:17.139484   10844 command_runner.go:130] ! I0603 12:46:13.164944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.004317ms"
	I0603 05:47:17.139744   10844 command_runner.go:130] ! I0603 12:46:13.168328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.004µs"
	I0603 05:47:17.139817   10844 command_runner.go:130] ! I0603 12:46:13.172600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="212.304157ms"
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.173022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.001µs"
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.502035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:17.139851   10844 command_runner.go:130] ! I0603 12:46:13.535943       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 05:47:17.139881   10844 command_runner.go:130] ! I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 05:47:17.139939   10844 command_runner.go:130] ! I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 05:47:17.139973   10844 command_runner.go:130] ! I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 05:47:17.140012   10844 command_runner.go:130] ! I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 05:47:17.140012   10844 command_runner.go:130] ! I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 05:47:17.140041   10844 command_runner.go:130] ! I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 05:47:17.140079   10844 command_runner.go:130] ! I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
	I0603 05:47:17.158490   10844 logs.go:123] Gathering logs for kindnet [a00a9dc2a937] ...
	I0603 05:47:17.158490   10844 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00a9dc2a937"
	I0603 05:47:17.191117   10844 command_runner.go:130] ! I0603 12:32:18.810917       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:18.811413       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:18.811451       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:28.826592       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191520   10844 command_runner.go:130] ! I0603 12:32:28.826645       1 main.go:227] handling current node
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.826658       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.826665       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.827203       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191593   10844 command_runner.go:130] ! I0603 12:32:28.827288       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840141       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840209       1 main.go:227] handling current node
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840223       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191657   10844 command_runner.go:130] ! I0603 12:32:38.840230       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:38.840630       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:38.840646       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850171       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850276       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850292       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850299       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850729       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:48.850876       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.856606       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857034       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857296       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.857510       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.858637       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:32:58.858677       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864801       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864826       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864838       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.864844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.865310       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:08.865474       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872391       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872568       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872599       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872624       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872804       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:18.872959       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886324       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886350       1 main.go:227] handling current node
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886362       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886368       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886918       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:28.886985       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:38.893626       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.191786   10844 command_runner.go:130] ! I0603 12:33:38.893899       1 main.go:227] handling current node
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.893916       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.894181       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192338   10844 command_runner.go:130] ! I0603 12:33:38.894556       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:38.894647       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910837       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910878       1 main.go:227] handling current node
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910891       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.910896       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.911015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192398   10844 command_runner.go:130] ! I0603 12:33:48.911041       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192482   10844 command_runner.go:130] ! I0603 12:33:58.926167       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192482   10844 command_runner.go:130] ! I0603 12:33:58.926268       1 main.go:227] handling current node
	I0603 05:47:17.192513   10844 command_runner.go:130] ! I0603 12:33:58.926284       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192534   10844 command_runner.go:130] ! I0603 12:33:58.926291       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192534   10844 command_runner.go:130] ! I0603 12:33:58.927007       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:33:58.927131       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:34:08.937101       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192569   10844 command_runner.go:130] ! I0603 12:34:08.937131       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937150       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937284       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:08.937292       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943292       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943378       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943532       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:18.943590       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950687       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950853       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950870       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.950878       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.951068       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:28.951084       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.965710       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967355       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967377       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967388       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967555       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:38.967566       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.975988       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976117       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976134       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976142       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976817       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:48.976852       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991312       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991846       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.991984       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992011       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992262       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:34:58.992331       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999119       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999230       1 main.go:227] handling current node
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999369       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999483       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999604       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.192627   10844 command_runner.go:130] ! I0603 12:35:08.999616       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007514       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007620       1 main.go:227] handling current node
	I0603 05:47:17.193171   10844 command_runner.go:130] ! I0603 12:35:19.007635       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007642       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007957       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:19.007986       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193221   10844 command_runner.go:130] ! I0603 12:35:29.013983       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193293   10844 command_runner.go:130] ! I0603 12:35:29.014066       1 main.go:227] handling current node
	I0603 05:47:17.193293   10844 command_runner.go:130] ! I0603 12:35:29.014081       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014088       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014429       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:29.014444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193337   10844 command_runner.go:130] ! I0603 12:35:39.025261       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025288       1 main.go:227] handling current node
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025300       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193397   10844 command_runner.go:130] ! I0603 12:35:39.025306       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193437   10844 command_runner.go:130] ! I0603 12:35:39.025682       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193437   10844 command_runner.go:130] ! I0603 12:35:39.025828       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193485   10844 command_runner.go:130] ! I0603 12:35:49.038248       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193485   10844 command_runner.go:130] ! I0603 12:35:49.039013       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.039143       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.039662       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.040380       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:49.040438       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052205       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052297       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052328       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052410       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052577       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:35:59.052607       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059926       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059974       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059988       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.059995       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.060515       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:09.060532       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.069521       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.069928       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070204       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070309       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.070978       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:19.071168       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084376       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084614       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084689       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.084804       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.085015       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:29.085100       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098298       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098419       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098435       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098444       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.098942       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:39.099083       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109724       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109872       1 main.go:227] handling current node
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.109894       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.110382       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.193527   10844 command_runner.go:130] ! I0603 12:36:49.110466       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.116904       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117061       1 main.go:227] handling current node
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117150       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117281       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194115   10844 command_runner.go:130] ! I0603 12:36:59.117621       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:36:59.117713       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133187       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133597       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.133807       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134149       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134720       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:09.134902       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141246       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141257       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141263       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141386       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:19.141456       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151018       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151126       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151147       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151156       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.151810       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:29.152019       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165415       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165510       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165524       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.165530       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.166173       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:39.166270       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181247       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181371       1 main.go:227] handling current node
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181387       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194212   10844 command_runner.go:130] ! I0603 12:37:49.181412       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:49.181852       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:49.182176       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189418       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189528       1 main.go:227] handling current node
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189544       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.189552       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.190394       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:37:59.190480       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197274       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197415       1 main.go:227] handling current node
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197432       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197440       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194753   10844 command_runner.go:130] ! I0603 12:38:09.197851       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:09.197933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204632       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204793       1 main.go:227] handling current node
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204826       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.204835       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.194944   10844 command_runner.go:130] ! I0603 12:38:19.205144       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:19.205251       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213406       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213503       1 main.go:227] handling current node
	I0603 05:47:17.195028   10844 command_runner.go:130] ! I0603 12:38:29.213518       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213524       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213644       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:29.213655       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:39.229128       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195106   10844 command_runner.go:130] ! I0603 12:38:39.229187       1 main.go:227] handling current node
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229199       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229205       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229332       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:39.229344       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195183   10844 command_runner.go:130] ! I0603 12:38:49.245014       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245069       1 main.go:227] handling current node
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245084       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245091       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245355       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195259   10844 command_runner.go:130] ! I0603 12:38:49.245382       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252267       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252359       1 main.go:227] handling current node
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252371       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.252376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.260367       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195336   10844 command_runner.go:130] ! I0603 12:38:59.260444       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270366       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270476       1 main.go:227] handling current node
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270490       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195414   10844 command_runner.go:130] ! I0603 12:39:09.270544       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:09.270869       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:09.271060       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277515       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277615       1 main.go:227] handling current node
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277631       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195491   10844 command_runner.go:130] ! I0603 12:39:19.277638       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195736   10844 command_runner.go:130] ! I0603 12:39:19.278259       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:19.278516       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287007       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287102       1 main.go:227] handling current node
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287117       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287124       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287246       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195783   10844 command_runner.go:130] ! I0603 12:39:29.287329       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293747       1 main.go:227] handling current node
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293802       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.293812       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.294185       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195860   10844 command_runner.go:130] ! I0603 12:39:39.294225       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304527       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304629       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304643       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304651       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.304863       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:49.305107       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314751       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314846       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314860       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314866       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.314992       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:39:59.315004       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321649       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321868       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321887       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.321895       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.322451       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:09.322470       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336642       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336845       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336864       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.336872       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.337002       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:19.337011       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350352       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350468       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350484       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350493       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.350956       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:29.351085       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366296       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366357       1 main.go:227] handling current node
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366370       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366376       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366518       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:39.366548       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.195998   10844 command_runner.go:130] ! I0603 12:40:49.371036       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371174       1 main.go:227] handling current node
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371189       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371218       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371340       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:49.371368       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196603   10844 command_runner.go:130] ! I0603 12:40:59.386603       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196715   10844 command_runner.go:130] ! I0603 12:40:59.387024       1 main.go:227] handling current node
	I0603 05:47:17.196715   10844 command_runner.go:130] ! I0603 12:40:59.387122       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196759   10844 command_runner.go:130] ! I0603 12:40:59.387140       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196759   10844 command_runner.go:130] ! I0603 12:40:59.387625       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:40:59.387909       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401524       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401658       1 main.go:227] handling current node
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401746       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196794   10844 command_runner.go:130] ! I0603 12:41:09.401844       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:09.402106       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:09.402238       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408360       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408404       1 main.go:227] handling current node
	I0603 05:47:17.196878   10844 command_runner.go:130] ! I0603 12:41:19.408417       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408423       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408530       1 main.go:223] Handling node with IPs: map[172.17.93.131:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:19.408541       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.2.0/24] 
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:29.414703       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.196956   10844 command_runner.go:130] ! I0603 12:41:29.414865       1 main.go:227] handling current node
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.414881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.414889       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415393       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415619       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197069   10844 command_runner.go:130] ! I0603 12:41:29.415702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.17.87.60 Flags: [] Table: 0} 
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426331       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426441       1 main.go:227] handling current node
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426455       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426462       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197141   10844 command_runner.go:130] ! I0603 12:41:39.426731       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:39.426795       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436618       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436724       1 main.go:227] handling current node
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436739       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197213   10844 command_runner.go:130] ! I0603 12:41:49.436745       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:49.437162       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:49.437250       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449218       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449377       1 main.go:227] handling current node
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449393       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197309   10844 command_runner.go:130] ! I0603 12:41:59.449400       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:41:59.449801       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:41:59.449916       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464583       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464690       1 main.go:227] handling current node
	I0603 05:47:17.197381   10844 command_runner.go:130] ! I0603 12:42:09.464705       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.464713       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.465435       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:09.465537       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197494   10844 command_runner.go:130] ! I0603 12:42:19.473928       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474029       1 main.go:227] handling current node
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474044       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474052       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474454       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197574   10844 command_runner.go:130] ! I0603 12:42:19.474552       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480280       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480469       1 main.go:227] handling current node
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480606       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.480686       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197649   10844 command_runner.go:130] ! I0603 12:42:29.481023       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:29.481213       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492462       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492634       1 main.go:227] handling current node
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492669       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197723   10844 command_runner.go:130] ! I0603 12:42:39.492711       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197795   10844 command_runner.go:130] ! I0603 12:42:39.492930       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197795   10844 command_runner.go:130] ! I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.197869   10844 command_runner.go:130] ! I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.197983   10844 command_runner.go:130] ! I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198012   10844 command_runner.go:130] ! I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198012   10844 command_runner.go:130] ! I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 05:47:17.198037   10844 command_runner.go:130] ! I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
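Editor's note: the repeating kindnet entries above are its periodic node sync — roughly every 10s it walks the node list, treats the node whose IP matches its own as the "current" node, and programs a route to every other node's pod CIDR via that node's IP (visible in the routes.go line at 12:41:29, after m03 moved to 172.17.87.60 with CIDR 10.244.3.0/24). A minimal Go sketch of that reconcile shape, with a hypothetical node type standing in for the Kubernetes client objects:

package main

import "fmt"

// node is a hypothetical stand-in for the two fields kindnet reads
// from each v1.Node: an InternalIP and spec.podCIDR.
type node struct {
	name    string
	ip      string // InternalIP, used as the route gateway
	podCIDR string // e.g. "10.244.1.0/24"
}

// syncRoutes mirrors the loop visible in the log: skip the node we
// are running on, and (re)program a route to every other node's pod
// CIDR via that node's IP. Here the route add is just printed;
// kindnet performs the equivalent netlink route replace.
func syncRoutes(currentIP string, nodes []node) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.ip == currentIP {
			fmt.Println("handling current node") // no route needed to ourselves
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
		fmt.Printf("Adding route {Dst: %s Gw: %s}\n", n.podCIDR, n.ip)
	}
}

func main() {
	nodes := []node{
		{"multinode-316400", "172.17.87.47", "10.244.0.0/24"},
		{"multinode-316400-m02", "172.17.94.201", "10.244.1.0/24"},
		{"multinode-316400-m03", "172.17.87.60", "10.244.3.0/24"},
	}
	syncRoutes("172.17.87.47", nodes)
}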
	I0603 05:47:19.717891   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:47:19.717891   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.717891   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.717891   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.725244   10844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 05:47:19.725244   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Audit-Id: 590bb11a-8aa1-4a7d-a20e-40318993805e
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.725244   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.725244   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.725244   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.726297   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0603 05:47:19.730172   10844 system_pods.go:59] 12 kube-system pods found
	I0603 05:47:19.730172   10844 system_pods.go:61] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:47:19.730172   10844 system_pods.go:61] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:47:19.730172   10844 system_pods.go:74] duration metric: took 3.7929055s to wait for pod list to return data ...
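Editor's note: the system_pods wait above — and the default service-account and k8s-apps waits that follow — all use the same pattern: GET the relevant list from the apiserver and inspect each item. A rough client-go equivalent of the pod check, assuming a kubeconfig path (minikube builds its client per profile, so treat this as a sketch, not its actual code path):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube resolves this per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// The log prints name, UID and phase for each pod.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %s not running yet\n", p.Name)
		}
	}
}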
	I0603 05:47:19.730172   10844 default_sa.go:34] waiting for default service account to be created ...
	I0603 05:47:19.731186   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/default/serviceaccounts
	I0603 05:47:19.731186   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.731186   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.731186   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.734358   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:19.734358   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Audit-Id: 16f2fd83-5fb5-428e-9796-be58f6e6c124
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.734358   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.734358   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.734358   10844 round_trippers.go:580]     Content-Length: 262
	I0603 05:47:19.734358   10844 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"995f775d-e30c-4872-957a-b91ade4bf666","resourceVersion":"318","creationTimestamp":"2024-06-03T12:23:18Z"}}]}
	I0603 05:47:19.734358   10844 default_sa.go:45] found service account: "default"
	I0603 05:47:19.734358   10844 default_sa.go:55] duration metric: took 4.1865ms for default service account to be created ...
	I0603 05:47:19.734358   10844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 05:47:19.734358   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:47:19.734358   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.734358   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.734358   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.740471   10844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 05:47:19.740660   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Audit-Id: ef05b071-8ada-4ff8-8a77-1135879cf8cc
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.740660   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.740660   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.740660   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.741997   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86494 chars]
	I0603 05:47:19.745713   10844 system_pods.go:86] 12 kube-system pods found
	I0603 05:47:19.745713   10844 system_pods.go:89] "coredns-7db6d8ff4d-4hrc6" [a3523f27-9775-4c1f-812f-a667faa1bace] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "etcd-multinode-316400" [8509d96a-4449-4656-8237-d194d2980506] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-2g66r" [3e88e85f-e61e-427f-944a-97b0ba90d219] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-4hpsl" [6815ff24-537b-42f3-b8ee-4c3e13be89f7] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kindnet-789v5" [d3147209-4266-4963-a4a6-05a024412c7b] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-apiserver-multinode-316400" [1c07a75f-fb00-4529-a699-378974ce494b] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-controller-manager-multinode-316400" [e821ebb1-cbc3-4ac5-8840-e066992422b0] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-dl97g" [78665ab7-c6dd-4381-8b29-75df4d31eff1] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-ks64x" [60c8f253-7e07-4f56-b1f2-e0032ac6a8ce] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-proxy-z26hc" [983da576-c697-4bdd-8908-93ec5b571787] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "kube-scheduler-multinode-316400" [b60616c7-ff08-4274-9dd9-601b5e4201bb] Running
	I0603 05:47:19.745713   10844 system_pods.go:89] "storage-provisioner" [bbd73e44-9a7e-4b5f-93e5-d1621c837baa] Running
	I0603 05:47:19.745713   10844 system_pods.go:126] duration metric: took 11.3549ms to wait for k8s-apps to be running ...
	I0603 05:47:19.745713   10844 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:47:19.756674   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:47:19.782100   10844 system_svc.go:56] duration metric: took 36.3864ms WaitForService to wait for kubelet
	I0603 05:47:19.782100   10844 kubeadm.go:576] duration metric: took 1m14.5362368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
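Editor's note: the kubelet check at 05:47:19.756674 is a systemctl is-active probe run over SSH; with --quiet, systemctl prints nothing and the exit code alone signals the state. A local (non-SSH) sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any non-zero exit
	// surfaces here as a non-nil error from Run.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}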
	I0603 05:47:19.782100   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:47:19.782303   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:47:19.782303   10844 round_trippers.go:469] Request Headers:
	I0603 05:47:19.782303   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:47:19.782303   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:47:19.786813   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:47:19.786850   10844 round_trippers.go:577] Response Headers:
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:19 GMT
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Audit-Id: 6a9eee4d-325a-45a9-be62-e3006bdc5c5d
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:47:19.786850   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:47:19.786850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:47:19.786850   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:47:19.786850   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16255 chars]
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:47:19.787943   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:47:19.787943   10844 node_conditions.go:105] duration metric: took 5.6975ms to run NodePressure ...
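Editor's note: the NodePressure step reads two capacity figures per node from the NodeList fetched above (ephemeral storage 17734596Ki, 2 CPUs, once for each of the three nodes). Those live in each Node's status as resource quantities; a small sketch of extracting them, with a Node status built by hand in place of the real GET:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Hand-built status with the figures from the log; in practice
	// this comes from the /api/v1/nodes response shown above.
	n := corev1.Node{}
	n.Status.Capacity = corev1.ResourceList{
		corev1.ResourceEphemeralStorage: resource.MustParse("17734596Ki"),
		corev1.ResourceCPU:              resource.MustParse("2"),
	}
	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}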
	I0603 05:47:19.787943   10844 start.go:240] waiting for startup goroutines ...
	I0603 05:47:19.787943   10844 start.go:245] waiting for cluster config update ...
	I0603 05:47:19.787943   10844 start.go:254] writing updated cluster config ...
	I0603 05:47:19.792158   10844 out.go:177] 
	I0603 05:47:19.794354   10844 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:47:19.805336   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:47:19.805336   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:19.811364   10844 out.go:177] * Starting "multinode-316400-m02" worker node in "multinode-316400" cluster
	I0603 05:47:19.815354   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:47:19.815354   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:47:19.816352   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:47:19.816352   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:47:19.816352   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:19.818350   10844 start.go:360] acquireMachinesLock for multinode-316400-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:47:19.818350   10844 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400-m02"
	I0603 05:47:19.818350   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:47:19.818350   10844 fix.go:54] fixHost starting: m02
	I0603 05:47:19.819344   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:22.081876   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:47:22.082897   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:22.082897   10844 fix.go:112] recreateIfNeeded on multinode-316400-m02: state=Stopped err=<nil>
	W0603 05:47:22.083091   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 05:47:22.086873   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400-m02" ...
	I0603 05:47:22.090352   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400-m02
	I0603 05:47:25.177596   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:25.177745   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:25.177745   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:47:25.177790   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:27.516419   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:30.077582   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:30.077582   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:31.078225   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:33.355128   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:33.355128   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:33.355904   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:35.905456   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:35.905456   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:36.913898   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:39.176413   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:39.176413   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:39.177413   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:41.755387   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:41.755427   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:42.761024   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:45.071712   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:45.071712   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:45.072525   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:47.686340   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:47:47.686340   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:48.692467   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:50.978637   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:50.978637   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:50.978822   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:53.613203   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:47:53.613203   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:53.616126   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:47:55.784628   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:47:55.784628   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:47:55.785211   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:47:58.445682   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:47:58.445682   10844 main.go:141] libmachine: [stderr =====>] : 
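Editor's note: the block above is libmachine's host-start wait. After Start-VM it alternates a (Get-VM ...).state query with a query of the first NIC's first IP, sleeping about a second between rounds, until the IP query stops coming back empty (here at 05:47:53 with 172.17.91.9). A sketch of that polling loop, shelling out to powershell.exe the same way:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// vmIP runs the same PowerShell query the log shows and returns the
// trimmed stdout, which stays empty until the guest has an address.
func vmIP(vm string) (string, error) {
	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const vm = "multinode-316400-m02"
	for {
		ip, err := vmIP(vm)
		if err != nil {
			log.Fatal(err)
		}
		if ip != "" {
			fmt.Println("host is up at", ip)
			return
		}
		time.Sleep(time.Second) // the log shows ~1s between retries
	}
}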
	I0603 05:47:58.446801   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:47:58.449014   10844 machine.go:94] provisionDockerMachine start ...
	I0603 05:47:58.449014   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:00.666433   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:00.666433   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:00.667076   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:03.331108   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:03.331762   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:03.338508   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:03.339260   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:03.339260   10844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 05:48:03.470788   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 05:48:03.470788   10844 buildroot.go:166] provisioning hostname "multinode-316400-m02"
	I0603 05:48:03.470903   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:05.649556   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:05.649556   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:05.650153   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:08.274779   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:08.274779   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:08.280851   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:08.281014   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:08.281014   10844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-316400-m02 && echo "multinode-316400-m02" | sudo tee /etc/hostname
	I0603 05:48:08.428162   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-316400-m02
	
	I0603 05:48:08.428162   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:10.699606   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:10.700197   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:10.700197   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:13.389916   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:13.390110   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:13.395698   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:13.396393   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:13.396393   10844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-316400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-316400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-316400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 05:48:13.547970   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
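Editor's note: hostname provisioning above is three SSH commands in sequence — hostname to read the current name (still "minikube" fresh off the base image), a tee write to /etc/hostname, and the grep/sed snippet that rewrites the 127.0.1.1 line in /etc/hosts or appends one. A sketch of running such a command with golang.org/x/crypto/ssh, assuming key auth with the machine's id_rsa and the buildroot "docker" user (both paths here are assumptions for illustration):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/multinode-316400-m02/id_rsa") // assumed key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.17.91.9:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname multinode-316400-m02 && echo "multinode-316400-m02" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}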
	I0603 05:48:13.547970   10844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 05:48:13.548521   10844 buildroot.go:174] setting up certificates
	I0603 05:48:13.548521   10844 provision.go:84] configureAuth start
	I0603 05:48:13.548521   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:15.748465   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:15.748711   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:15.748711   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:18.332996   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:18.333904   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:18.333904   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:20.484982   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:20.484982   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:20.486701   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:23.049799   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:23.050676   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:23.050676   10844 provision.go:143] copyHostCerts
	I0603 05:48:23.050846   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0603 05:48:23.050846   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 05:48:23.050846   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 05:48:23.051663   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 05:48:23.052829   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0603 05:48:23.053142   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 05:48:23.053142   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 05:48:23.053460   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 05:48:23.054495   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0603 05:48:23.054908   10844 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 05:48:23.054908   10844 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 05:48:23.055434   10844 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 05:48:23.057051   10844 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-316400-m02 san=[127.0.0.1 172.17.91.9 localhost minikube multinode-316400-m02]
	I0603 05:48:23.193883   10844 provision.go:177] copyRemoteCerts
	I0603 05:48:23.208162   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 05:48:23.208162   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:25.424419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:25.424419   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:25.424618   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:28.040370   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:28.040370   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:28.040799   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:48:28.149364   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9411837s)
	I0603 05:48:28.149364   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 05:48:28.150063   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 05:48:28.198182   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 05:48:28.198603   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 05:48:28.245379   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 05:48:28.245683   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 05:48:28.292188   10844 provision.go:87] duration metric: took 14.7436125s to configureAuth
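
configureAuth regenerates the Docker TLS material for the node: the host certs are copied locally, then a server certificate signed by the minikube CA is issued with the SANs shown in the log (127.0.0.1, the node IP, localhost, minikube, and the node name). A compressed Go sketch of building such a SAN certificate with the standard library (error handling trimmed; an illustration, not minikube's source):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert builds a server certificate carrying the SANs from the log:
	// IPs go into IPAddresses, names into DNSNames.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-316400-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.91.9")},
			DNSNames:     []string{"localhost", "minikube", "multinode-316400-m02"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(der)
		if _, err := newServerCert(ca, caKey); err != nil {
			panic(err)
		}
	}
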
	I0603 05:48:28.292285   10844 buildroot.go:189] setting minikube options for container-runtime
	I0603 05:48:28.292916   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:48:28.293003   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:30.448803   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:30.448803   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:30.449848   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:33.050710   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:33.050710   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:33.057619   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:33.057986   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:33.057986   10844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 05:48:33.200799   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 05:48:33.200954   10844 buildroot.go:70] root file system type: tmpfs
	I0603 05:48:33.201163   10844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 05:48:33.201227   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:35.362575   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:35.362575   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:35.362844   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:37.974012   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:37.974012   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:37.979346   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:37.979742   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:37.979900   10844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.88"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 05:48:38.135723   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.88
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 05:48:38.135723   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:40.335497   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:40.335641   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:40.335641   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:42.919471   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:42.919471   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:42.925214   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:42.925740   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:42.925740   10844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 05:48:45.226379   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 05:48:45.226489   10844 machine.go:97] duration metric: took 46.7772462s to provisionDockerMachine
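
Note the update pattern used for the Docker unit: the new file is written to docker.service.new, diffed against the installed unit, and only swapped in (followed by daemon-reload, enable, and restart) when it differs, so an unchanged unit never restarts the daemon. Here the diff failed because no unit existed yet, so the new file was simply moved into place and the service enabled. A local Go sketch of the same write-if-changed idea (the real flow runs these steps over SSH on the guest):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// updateUnitIfChanged mirrors the diff-then-mv step above: stage the new
	// unit at path+".new" and only swap it in, reload, and restart when it
	// differs from what is already installed.
	func updateUnitIfChanged(path string, newContent []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContent) {
			return false, nil // identical unit; leave the running service alone
		}
		if err := os.WriteFile(path+".new", newContent, 0644); err != nil {
			return false, err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return false, err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return true, err
			}
		}
		return true, nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if _, err := updateUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
			os.Exit(1)
		}
	}
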
	I0603 05:48:45.226489   10844 start.go:293] postStartSetup for "multinode-316400-m02" (driver="hyperv")
	I0603 05:48:45.226568   10844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 05:48:45.241815   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 05:48:45.241815   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:47.428646   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:47.428646   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:47.428744   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:50.025859   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:50.025859   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:50.026193   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:48:50.138638   10844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8968048s)
	I0603 05:48:50.152701   10844 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 05:48:50.160195   10844 command_runner.go:130] > NAME=Buildroot
	I0603 05:48:50.160377   10844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 05:48:50.160377   10844 command_runner.go:130] > ID=buildroot
	I0603 05:48:50.160377   10844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 05:48:50.160377   10844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 05:48:50.160463   10844 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 05:48:50.160499   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 05:48:50.160898   10844 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 05:48:50.161878   10844 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 05:48:50.161878   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /etc/ssl/certs/73642.pem
	I0603 05:48:50.172632   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 05:48:50.201911   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 05:48:50.255528   10844 start.go:296] duration metric: took 5.0289406s for postStartSetup
	I0603 05:48:50.255528   10844 fix.go:56] duration metric: took 1m30.4368433s for fixHost
	I0603 05:48:50.255528   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:52.492419   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:52.493398   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:52.493398   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:55.138723   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:55.139728   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:55.145249   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:55.145962   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:55.145962   10844 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 05:48:55.274953   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418935.277931907
	
	I0603 05:48:55.274953   10844 fix.go:216] guest clock: 1717418935.277931907
	I0603 05:48:55.274953   10844 fix.go:229] Guest: 2024-06-03 05:48:55.277931907 -0700 PDT Remote: 2024-06-03 05:48:50.255528 -0700 PDT m=+301.525318401 (delta=5.022403907s)
	I0603 05:48:55.275139   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:48:57.443591   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:48:57.443591   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:57.443728   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:48:59.950446   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:48:59.950446   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:48:59.967909   10844 main.go:141] libmachine: Using SSH client type: native
	I0603 05:48:59.968575   10844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.91.9 22 <nil> <nil>}
	I0603 05:48:59.968575   10844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717418935
	I0603 05:49:00.114262   10844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:48:55 UTC 2024
	
	I0603 05:49:00.114363   10844 fix.go:236] clock set: Mon Jun  3 12:48:55 UTC 2024
	 (err=<nil>)
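
The fix step above measures guest clock skew by running date +%s.%N on the VM, comparing it against the host time, and resetting the guest clock with date -s @<epoch> when the delta (about 5s here) is too large. A simplified Go sketch of that comparison; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff, and the date commands run locally here rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	// syncGuestClock reads the (guest) clock as a fractional epoch, compares it
	// to the local clock, and resets it when the skew exceeds maxSkew.
	func syncGuestClock(maxSkew time.Duration) error {
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			return err
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			return err
		}
		guest := time.Unix(int64(secs), 0)
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		if delta <= maxSkew {
			return nil
		}
		// Equivalent of: sudo date -s @<epoch>
		return exec.Command("sudo", "date", "-s",
			fmt.Sprintf("@%d", time.Now().Unix())).Run()
	}

	func main() {
		if err := syncGuestClock(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}
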
	I0603 05:49:00.114363   10844 start.go:83] releasing machines lock for "multinode-316400-m02", held for 1m40.2956418s
	I0603 05:49:00.114572   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:02.234097   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:02.237638   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:02.237722   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:04.718851   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:04.718851   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:04.730665   10844 out.go:177] * Found network options:
	I0603 05:49:04.737150   10844 out.go:177]   - NO_PROXY=172.17.95.88
	W0603 05:49:04.743341   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:49:04.745019   10844 out.go:177]   - NO_PROXY=172.17.95.88
	W0603 05:49:04.750280   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 05:49:04.751601   10844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 05:49:04.754611   10844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 05:49:04.755144   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:04.763656   10844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 05:49:04.763656   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:06.978121   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:06.978121   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:06.978440   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:06.981691   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:06.981745   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:06.981892   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:09.640036   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:09.640036   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:09.640442   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:09.673237   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:09.673237   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:09.673843   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:09.730668   10844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 05:49:09.736701   10844 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9730256s)
	W0603 05:49:09.736959   10844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 05:49:09.748391   10844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 05:49:09.836734   10844 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 05:49:09.836791   10844 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0819933s)
	I0603 05:49:09.836843   10844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 05:49:09.836941   10844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 05:49:09.836941   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:49:09.837157   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:49:09.875932   10844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 05:49:09.885983   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 05:49:09.918464   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 05:49:09.938501   10844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 05:49:09.952161   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 05:49:09.987310   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:49:10.023512   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 05:49:10.054653   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 05:49:10.089787   10844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 05:49:10.120953   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 05:49:10.150956   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 05:49:10.181682   10844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 05:49:10.216356   10844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 05:49:10.241134   10844 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 05:49:10.251875   10844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 05:49:10.283072   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:10.488010   10844 ssh_runner.go:195] Run: sudo systemctl restart containerd
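
Before settling on a runtime, minikube rewrites containerd's configuration so it would use the cgroupfs cgroup driver, mostly through in-place sed edits of /etc/containerd/config.toml as shown above. The SystemdCgroup edit, rendered as an equivalent Go regex rewrite (illustrative only):

	package main

	import (
		"os"
		"regexp"
	)

	// setCgroupfsDriver applies the same change as the sed command in the log:
	// every "SystemdCgroup = ..." line in config.toml is forced to false so
	// containerd falls back to the cgroupfs driver.
	func setCgroupfsDriver(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		return os.WriteFile(path, data, 0644)
	}

	func main() {
		if err := setCgroupfsDriver("/etc/containerd/config.toml"); err != nil {
			os.Exit(1)
		}
	}
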
	I0603 05:49:10.522433   10844 start.go:494] detecting cgroup driver to use...
	I0603 05:49:10.538331   10844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 05:49:10.561454   10844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 05:49:10.561573   10844 command_runner.go:130] > [Unit]
	I0603 05:49:10.561573   10844 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 05:49:10.561625   10844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 05:49:10.561625   10844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 05:49:10.561691   10844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 05:49:10.561691   10844 command_runner.go:130] > StartLimitBurst=3
	I0603 05:49:10.561691   10844 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 05:49:10.561756   10844 command_runner.go:130] > [Service]
	I0603 05:49:10.561756   10844 command_runner.go:130] > Type=notify
	I0603 05:49:10.561756   10844 command_runner.go:130] > Restart=on-failure
	I0603 05:49:10.561823   10844 command_runner.go:130] > Environment=NO_PROXY=172.17.95.88
	I0603 05:49:10.561823   10844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 05:49:10.561902   10844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 05:49:10.561902   10844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 05:49:10.561902   10844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 05:49:10.561998   10844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 05:49:10.561998   10844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 05:49:10.561998   10844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 05:49:10.561998   10844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 05:49:10.562097   10844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 05:49:10.562097   10844 command_runner.go:130] > ExecStart=
	I0603 05:49:10.562157   10844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 05:49:10.562157   10844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 05:49:10.562227   10844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 05:49:10.562227   10844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitNOFILE=infinity
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitNPROC=infinity
	I0603 05:49:10.562294   10844 command_runner.go:130] > LimitCORE=infinity
	I0603 05:49:10.562360   10844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 05:49:10.562360   10844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 05:49:10.562360   10844 command_runner.go:130] > TasksMax=infinity
	I0603 05:49:10.562360   10844 command_runner.go:130] > TimeoutStartSec=0
	I0603 05:49:10.562360   10844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 05:49:10.562360   10844 command_runner.go:130] > Delegate=yes
	I0603 05:49:10.562360   10844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 05:49:10.562360   10844 command_runner.go:130] > KillMode=process
	I0603 05:49:10.562360   10844 command_runner.go:130] > [Install]
	I0603 05:49:10.562360   10844 command_runner.go:130] > WantedBy=multi-user.target
	I0603 05:49:10.577467   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:49:10.608695   10844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 05:49:10.654614   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 05:49:10.688461   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:49:10.728285   10844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 05:49:10.793137   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 05:49:10.817940   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 05:49:10.863069   10844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 05:49:10.874688   10844 ssh_runner.go:195] Run: which cri-dockerd
	I0603 05:49:10.882131   10844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 05:49:10.892486   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 05:49:10.911746   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 05:49:10.954511   10844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 05:49:11.144475   10844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 05:49:11.325909   10844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 05:49:11.326022   10844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 05:49:11.371944   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:11.570302   10844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 05:49:14.135923   10844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5656111s)
	I0603 05:49:14.147984   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 05:49:14.184481   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:49:14.221951   10844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 05:49:14.415388   10844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 05:49:14.622075   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:14.816359   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 05:49:14.860866   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 05:49:14.892485   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:15.079416   10844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 05:49:15.193252   10844 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 05:49:15.205956   10844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 05:49:15.212570   10844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 05:49:15.212570   10844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 05:49:15.212570   10844 command_runner.go:130] > Device: 0,22	Inode: 854         Links: 1
	I0603 05:49:15.212570   10844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 05:49:15.212570   10844 command_runner.go:130] > Access: 2024-06-03 12:49:15.111881513 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] > Modify: 2024-06-03 12:49:15.111881513 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] > Change: 2024-06-03 12:49:15.114881530 +0000
	I0603 05:49:15.212570   10844 command_runner.go:130] >  Birth: -
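
With Docker selected as the runtime, the flow waits up to 60s for the cri-dockerd socket to appear, confirming it via stat as above, before probing crictl. A small Go sketch of such a socket wait loop (the 500ms poll interval is an assumption of this sketch):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls the given path until it exists as a unix socket or
	// the deadline passes, mirroring the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
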
	I0603 05:49:15.216853   10844 start.go:562] Will wait 60s for crictl version
	I0603 05:49:15.229461   10844 ssh_runner.go:195] Run: which crictl
	I0603 05:49:15.236729   10844 command_runner.go:130] > /usr/bin/crictl
	I0603 05:49:15.256025   10844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 05:49:15.313213   10844 command_runner.go:130] > Version:  0.1.0
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeName:  docker
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 05:49:15.313213   10844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 05:49:15.313817   10844 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 05:49:15.324990   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:49:15.354571   10844 command_runner.go:130] > 26.0.2
	I0603 05:49:15.365374   10844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 05:49:15.393158   10844 command_runner.go:130] > 26.0.2
	I0603 05:49:15.398082   10844 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 05:49:15.400685   10844 out.go:177]   - env NO_PROXY=172.17.95.88
	I0603 05:49:15.404180   10844 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 05:49:15.409714   10844 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:ec:f0 Flags:up|broadcast|multicast|running}
	I0603 05:49:15.413324   10844 ip.go:210] interface addr: fe80::e3df:1330:e4d5:da29/64
	I0603 05:49:15.413324   10844 ip.go:210] interface addr: 172.17.80.1/20
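
getIPForInterface scans the host's adapters for the first one whose name matches the Hyper-V switch prefix and takes its IPv4 address, which then becomes host.minikube.internal inside the guest. A Go sketch of that interface lookup with the standard library (not minikube's exact implementation):

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// findInterfaceByPrefix walks the host interfaces, skips names that do not
	// match the prefix (as the log does for "Ethernet 2" and the loopback),
	// and returns the first IPv4 address on the matching adapter.
	func findInterfaceByPrefix(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue
			}
			addrs, err := ifc.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP, nil
				}
			}
		}
		return nil, fmt.Errorf("no interface matching %q", prefix)
	}

	func main() {
		ip, err := findInterfaceByPrefix("vEthernet (Default Switch)")
		fmt.Println(ip, err)
	}
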
	I0603 05:49:15.429278   10844 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0603 05:49:15.431950   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:49:15.456370   10844 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:49:15.456536   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:15.457770   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:17.562243   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:17.573709   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:17.573709   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:17.574620   10844 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400 for IP: 172.17.91.9
	I0603 05:49:17.574620   10844 certs.go:194] generating shared ca certs ...
	I0603 05:49:17.574620   10844 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 05:49:17.575254   10844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0603 05:49:17.575670   10844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0603 05:49:17.576058   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 05:49:17.576421   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 05:49:17.577171   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem (1338 bytes)
	W0603 05:49:17.577171   10844 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364_empty.pem, impossibly tiny 0 bytes
	I0603 05:49:17.577171   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 05:49:17.577710   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0603 05:49:17.577941   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 05:49:17.578248   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0603 05:49:17.578653   10844 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem (1708 bytes)
	I0603 05:49:17.579018   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem -> /usr/share/ca-certificates/7364.pem
	I0603 05:49:17.579180   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> /usr/share/ca-certificates/73642.pem
	I0603 05:49:17.579324   10844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:17.579482   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 05:49:17.631236   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 05:49:17.680481   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 05:49:17.745777   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 05:49:17.792222   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\7364.pem --> /usr/share/ca-certificates/7364.pem (1338 bytes)
	I0603 05:49:17.831392   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /usr/share/ca-certificates/73642.pem (1708 bytes)
	I0603 05:49:17.885838   10844 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 05:49:17.939129   10844 ssh_runner.go:195] Run: openssl version
	I0603 05:49:17.952972   10844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 05:49:17.967564   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 05:49:18.004003   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.009514   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.012306   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.022162   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 05:49:18.026375   10844 command_runner.go:130] > b5213941
	I0603 05:49:18.043670   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 05:49:18.075704   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7364.pem && ln -fs /usr/share/ca-certificates/7364.pem /etc/ssl/certs/7364.pem"
	I0603 05:49:18.106909   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.109224   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.113754   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 10:58 /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.123984   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7364.pem
	I0603 05:49:18.133227   10844 command_runner.go:130] > 51391683
	I0603 05:49:18.146681   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7364.pem /etc/ssl/certs/51391683.0"
	I0603 05:49:18.177866   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73642.pem && ln -fs /usr/share/ca-certificates/73642.pem /etc/ssl/certs/73642.pem"
	I0603 05:49:18.209994   10844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.213215   10844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.213215   10844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 10:58 /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.218931   10844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73642.pem
	I0603 05:49:18.230842   10844 command_runner.go:130] > 3ec20f2e
	I0603 05:49:18.249230   10844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73642.pem /etc/ssl/certs/3ec20f2e.0"
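
Each CA certificate is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs as <subject-hash>.0, the layout OpenSSL uses to look up trust anchors by subject hash (hence the openssl x509 -hash calls above). A Go sketch of that install step (paths mirror the log; illustrative only):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert asks openssl for the certificate's subject hash and creates
	// the <hash>.0 symlink in /etc/ssl/certs, like the ln -fs commands above.
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			os.Exit(1)
		}
	}
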
	I0603 05:49:18.277306   10844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 05:49:18.284259   10844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:49:18.288414   10844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 05:49:18.288737   10844 kubeadm.go:928] updating node {m02 172.17.91.9 8443 v1.30.1 docker false true} ...
	I0603 05:49:18.288985   10844 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-316400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.91.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 05:49:18.299487   10844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubeadm
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubectl
	I0603 05:49:18.320708   10844 command_runner.go:130] > kubelet
	I0603 05:49:18.320708   10844 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 05:49:18.332329   10844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 05:49:18.350964   10844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 05:49:18.383162   10844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 05:49:18.426655   10844 ssh_runner.go:195] Run: grep 172.17.95.88	control-plane.minikube.internal$ /etc/hosts
	I0603 05:49:18.428927   10844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 05:49:18.463976   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:18.655362   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:49:18.682591   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:18.686530   10844 start.go:316] joinCluster: &{Name:multinode-316400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-316400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.88 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.87.60 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0603 05:49:18.686658   10844 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:18.686721   10844 host.go:66] Checking if "multinode-316400-m02" exists ...
	I0603 05:49:18.686947   10844 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:49:18.688092   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:18.688779   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:20.856197   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:20.856197   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:20.856197   10844 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:49:20.858384   10844 api_server.go:166] Checking apiserver status ...
	I0603 05:49:20.871249   10844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:49:20.871249   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:23.036998   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:23.036998   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:23.037280   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:25.581615   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:49:25.581615   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:25.585695   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:49:25.705225   10844 command_runner.go:130] > 1862
	I0603 05:49:25.705225   10844 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8339572s)
	I0603 05:49:25.717194   10844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1862/cgroup
	W0603 05:49:25.736329   10844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1862/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 05:49:25.747732   10844 ssh_runner.go:195] Run: ls
	I0603 05:49:25.759177   10844 api_server.go:253] Checking apiserver healthz at https://172.17.95.88:8443/healthz ...
	I0603 05:49:25.765473   10844 api_server.go:279] https://172.17.95.88:8443/healthz returned 200:
	ok
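
The apiserver health probe above is a plain HTTPS GET against /healthz on the control-plane IP, with a 200 response treated as healthy. A self-contained Go sketch of the same probe; certificate verification is skipped here only to keep the example standalone (the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	// checkHealthz issues the probe from the log: GET https://<host>:8443/healthz
	// and treat HTTP 200 as healthy.
	func checkHealthz(host string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", host))
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("172.17.95.88"); err != nil {
			fmt.Println(err)
		}
	}
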
	I0603 05:49:25.777102   10844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-316400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0603 05:49:25.932078   10844 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-789v5, kube-system/kube-proxy-z26hc
	I0603 05:49:28.961323   10844 command_runner.go:130] > node/multinode-316400-m02 cordoned
	I0603 05:49:28.961374   10844 command_runner.go:130] > pod "busybox-fc5497c4f-hmxqp" has DeletionTimestamp older than 1 seconds, skipping
	I0603 05:49:28.961412   10844 command_runner.go:130] > node/multinode-316400-m02 drained
	I0603 05:49:28.961412   10844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-316400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1842987s)
	I0603 05:49:28.961412   10844 node.go:128] successfully drained node "multinode-316400-m02"
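
Note: the drain above is run over SSH with the bundled kubectl and deliberately aggressive flags (forced, 1-second grace period, eviction API bypassed) so the worker empties quickly even with DaemonSet pods and emptyDir data present. A minimal stand-alone equivalent, assuming a kubectl on PATH that points at this cluster:

    # Mirror of the logged drain invocation (same flags, run from any admin shell)
    kubectl drain multinode-316400-m02 \
      --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
      --disable-eviction --ignore-daemonsets --delete-emptydir-data
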
	I0603 05:49:28.961412   10844 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0603 05:49:28.961412   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:49:31.096075   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:31.096075   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:31.107314   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:33.701759   10844 main.go:141] libmachine: [stdout =====>] : 172.17.91.9
	
	I0603 05:49:33.701963   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:33.701963   10844 sshutil.go:53] new ssh client: &{IP:172.17.91.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:49:34.170937   10844 command_runner.go:130] ! W0603 12:49:34.176875    1540 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0603 05:49:34.699261   10844 command_runner.go:130] ! W0603 12:49:34.704908    1540 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce: output: E0603 12:49:34.393381    1579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-hmxqp_default\" network: cni config uninitialized" podSandboxID="0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce"
	I0603 05:49:34.699261   10844 command_runner.go:130] ! time="2024-06-03T12:49:34Z" level=fatal msg="stopping the pod sandbox \"0994b46a73710b77f0a814bb946c1582e328418dabcdbfe77e547a83bd77a0ce\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-hmxqp_default\" network: cni config uninitialized"
	I0603 05:49:34.699261   10844 command_runner.go:130] ! : exit status 1
	I0603 05:49:34.725187   10844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Stopping the kubelet service
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0603 05:49:34.725187   10844 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0603 05:49:34.725187   10844 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0603 05:49:34.725187   10844 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0603 05:49:34.725187   10844 command_runner.go:130] > to reset your system's IPVS tables.
	I0603 05:49:34.725187   10844 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0603 05:49:34.725187   10844 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0603 05:49:34.725187   10844 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.7637527s)
	I0603 05:49:34.725187   10844 node.go:155] successfully reset node "multinode-316400-m02"
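
Note: the "Failed to remove containers" warning during the reset is the busybox pod sandbox refusing to stop because the CNI config had already been torn down ("cni config uninitialized"); with --ignore-preflight-errors=all it is non-fatal. As kubeadm's own output states, reset does not touch CNI config, iptables/IPVS state, or kubeconfigs. A sketch of the manual follow-up cleanup it recommends, to be run on the node only if a fully clean slate is needed:

    # Post-reset cleanup suggested by kubeadm's output above
    sudo rm -rf /etc/cni/net.d        # CNI config survives `kubeadm reset`
    sudo iptables -F                  # iptables rules survive as well
    sudo ipvsadm --clear              # only if the cluster used IPVS
    rm -f "$HOME/.kube/config"        # stale kubeconfig, if present
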
	I0603 05:49:34.726605   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:49:34.727376   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:49:34.728791   10844 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 05:49:34.729245   10844 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0603 05:49:34.729328   10844 round_trippers.go:463] DELETE https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:34.729394   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:34.729394   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:34.729394   10844 round_trippers.go:473]     Content-Type: application/json
	I0603 05:49:34.729394   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:34.745054   10844 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 05:49:34.745054   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:34.745054   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:34.745054   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Content-Length: 171
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:34 GMT
	I0603 05:49:34.745054   10844 round_trippers.go:580]     Audit-Id: 3873f445-5b68-4d5c-a635-4ffa42a6e4c2
	I0603 05:49:34.745054   10844 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-316400-m02","kind":"nodes","uid":"d68db3cb-6ccf-4f6c-9a68-5fc69a4d3136"}}
	I0603 05:49:34.745054   10844 node.go:180] successfully deleted node "multinode-316400-m02"
	I0603 05:49:34.745054   10844 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
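
Note: once the node is reset, its Node object is removed with a plain DELETE against the apiserver (the request/response pair above, answered with a "Success" Status). The equivalent from a shell, assuming admin credentials, is simply:

    # Equivalent of the logged DELETE /api/v1/nodes/multinode-316400-m02
    kubectl delete node multinode-316400-m02
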
	I0603 05:49:34.745054   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 05:49:34.745054   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:49:36.852948   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:36.852948   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:36.853149   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:39.367798   10844 main.go:141] libmachine: [stdout =====>] : 172.17.95.88
	
	I0603 05:49:39.368018   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:39.368187   10844 sshutil.go:53] new ssh client: &{IP:172.17.95.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:49:39.554752   10844 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 
	I0603 05:49:39.554752   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8096798s)
	I0603 05:49:39.554752   10844 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:39.554752   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02"
	I0603 05:49:39.762907   10844 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0603 05:49:41.615948   10844 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0603 05:49:41.615948   10844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 05:49:41.615948   10844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 505.331217ms
	I0603 05:49:41.616081   10844 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0603 05:49:41.616157   10844 command_runner.go:130] > This node has joined the cluster:
	I0603 05:49:41.616157   10844 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0603 05:49:41.616157   10844 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0603 05:49:41.616225   10844 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0603 05:49:41.616225   10844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yb71c2.xo9vol9vszz2kqx7 --discovery-token-ca-cert-hash sha256:93074109dc0351ee72cc8e3b75bb1e072c3a08962436c219451bc8bad5b2c2b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-316400-m02": (2.0614656s)
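
Note: the rejoin is the standard two-step kubeadm flow: mint a fresh join command on the control plane (--ttl=0 makes the bootstrap token non-expiring), then execute it on the worker with the CRI socket and node name pinned. Reconstructed from the logged commands, with the secrets elided:

    # 1) On the control plane: print a join command with a non-expiring token
    sudo kubeadm token create --print-join-command --ttl=0
    # 2) On the worker: run the printed command with minikube's extra flags
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock \
      --node-name=multinode-316400-m02
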
	I0603 05:49:41.616322   10844 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 05:49:41.822049   10844 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0603 05:49:42.035385   10844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-316400-m02 minikube.k8s.io/updated_at=2024_06_03T05_49_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8 minikube.k8s.io/name=multinode-316400 minikube.k8s.io/primary=false
	I0603 05:49:42.158406   10844 command_runner.go:130] > node/multinode-316400-m02 labeled
	I0603 05:49:42.158520   10844 start.go:318] duration metric: took 23.4719694s to joinCluster
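
Note: after the join, minikube stamps the node with its bookkeeping labels (commit, version, profile name, primary=false, updated_at) so the profile can later be reconciled with the cluster. The shape of that call, trimmed from the logged command:

    # Subset of the logged label invocation
    kubectl label --overwrite nodes multinode-316400-m02 \
      minikube.k8s.io/name=multinode-316400 \
      minikube.k8s.io/primary=false
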
	I0603 05:49:42.158713   10844 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.91.9 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 05:49:42.161447   10844 out.go:177] * Verifying Kubernetes components...
	I0603 05:49:42.159509   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:42.173773   10844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 05:49:42.368715   10844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 05:49:42.394241   10844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 05:49:42.394850   10844 kapi.go:59] client config for multinode-316400: &rest.Config{Host:"https://172.17.95.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-316400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x212d8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 05:49:42.395703   10844 node_ready.go:35] waiting up to 6m0s for node "multinode-316400-m02" to be "Ready" ...
	I0603 05:49:42.395869   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:42.395941   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:42.395941   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:42.395941   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:42.396182   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:42.396182   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Audit-Id: d8c749ae-4814-4b84-8902-12d268e26370
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:42.399931   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:42.399931   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:42.399931   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:42 GMT
	I0603 05:49:42.400133   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2096","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3563 chars]
	I0603 05:49:42.897461   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:42.897700   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:42.897700   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:42.897777   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:42.906010   10844 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 05:49:42.906067   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:42.906104   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:42.906104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:42.906104   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:42.906153   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:42 GMT
	I0603 05:49:42.906153   10844 round_trippers.go:580]     Audit-Id: 05bb6fd4-de17-4aa7-a2e5-2202c5bedbb4
	I0603 05:49:42.906189   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:42.909322   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2096","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3563 chars]
	I0603 05:49:43.397782   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:43.397782   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:43.397782   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:43.397782   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:43.402349   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:49:43.402349   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:43.402349   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:43.402349   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:43 GMT
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Audit-Id: 1338c209-a00c-42ee-a21c-edadda92c1e5
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:43.402349   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:43.402349   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:43.899939   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:43.900175   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:43.900175   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:43.900175   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:43.900979   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:43.906217   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:43.906217   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:43.906217   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:43 GMT
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Audit-Id: 3d8fbded-525b-46b9-b728-c2dc9c943698
	I0603 05:49:43.906217   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:43.906505   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:44.396688   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:44.396727   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:44.396727   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:44.396727   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:44.397390   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:44.397390   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:44.397390   10844 round_trippers.go:580]     Audit-Id: 6fb6719b-9dd9-4bb8-9021-53bb90f7e450
	I0603 05:49:44.400755   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:44.400755   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:44.400755   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:44.400755   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:44.400813   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:44 GMT
	I0603 05:49:44.401066   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:44.401433   10844 node_ready.go:53] node "multinode-316400-m02" has status "Ready":"False"
	I0603 05:49:44.904994   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:44.905092   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:44.905092   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:44.905092   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:44.905364   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:44.909320   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:44.909320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:44.909320   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:44.909412   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:44 GMT
	I0603 05:49:44.909412   10844 round_trippers.go:580]     Audit-Id: d3a9d372-c76f-4189-b7d3-44c2b405af28
	I0603 05:49:44.909676   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:44.909676   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:44.909770   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2104","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3672 chars]
	I0603 05:49:45.410634   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:45.410883   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.410883   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.410883   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.411655   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.411655   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.411655   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Audit-Id: 00ce39cb-d49d-4f91-817c-d53ac3fa186b
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.415633   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.415633   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.415633   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.415753   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2120","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3930 chars]
	I0603 05:49:45.416242   10844 node_ready.go:49] node "multinode-316400-m02" has status "Ready":"True"
	I0603 05:49:45.416311   10844 node_ready.go:38] duration metric: took 3.020596s for node "multinode-316400-m02" to be "Ready" ...
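
Note: the readiness wait above is a plain poll: GET the Node roughly every 500ms and inspect its Ready condition (the "Ready":"False" entry at 05:49:44 is one unsuccessful iteration; the response at resourceVersion 2120 is the first to carry Ready=True). The same check as a one-liner, assuming kubectl access to the cluster:

    # Prints "True" once the kubelet reports the node Ready
    kubectl get node multinode-316400-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
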
	I0603 05:49:45.416311   10844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:49:45.416515   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods
	I0603 05:49:45.416548   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.416548   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.416548   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.417243   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.417243   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.417243   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Audit-Id: f7c56023-9148-4a9d-acaa-840d53030101
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.417243   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.417243   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.423182   10844 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2122"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86024 chars]
	I0603 05:49:45.427677   10844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.427677   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hrc6
	I0603 05:49:45.427677   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.427677   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.427677   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.428910   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:49:45.428910   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Audit-Id: 5a3795cd-40fc-408b-b70c-0a2710cead91
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.428910   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.428910   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.428910   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.431883   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4hrc6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a3523f27-9775-4c1f-812f-a667faa1bace","resourceVersion":"1931","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"825d2a6c-fdde-4bd1-830f-8b953ad1437d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"825d2a6c-fdde-4bd1-830f-8b953ad1437d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0603 05:49:45.432593   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.432593   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.432593   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.432593   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.433195   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.433195   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.433195   10844 round_trippers.go:580]     Audit-Id: 082c65f7-87a9-4ebd-a987-57e708a740f0
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.435976   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.435976   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.435976   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.436113   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.436113   10844 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.436113   10844 pod_ready.go:81] duration metric: took 8.4365ms for pod "coredns-7db6d8ff4d-4hrc6" in "kube-system" namespace to be "Ready" ...
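
Note: every system-critical pod gets the same two-step check: fetch the Pod and read its Ready condition, then fetch the Node it is scheduled on to confirm the node is still Ready before the pod counts as healthy. Outside minikube the per-pod half can be expressed declaratively; a sketch using the same pod:

    # Block until the pod reports Ready, or fail after the timeout
    kubectl -n kube-system wait pod coredns-7db6d8ff4d-4hrc6 \
      --for=condition=Ready --timeout=6m
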
	I0603 05:49:45.436113   10844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.436866   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-316400
	I0603 05:49:45.436866   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.436866   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.436866   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.437673   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.446287   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.446287   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.446287   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.446287   10844 round_trippers.go:580]     Audit-Id: 90a18fc8-c241-415f-9c9c-c71f861fd851
	I0603 05:49:45.446520   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-316400","namespace":"kube-system","uid":"8509d96a-4449-4656-8237-d194d2980506","resourceVersion":"1822","creationTimestamp":"2024-06-03T12:46:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.95.88:2379","kubernetes.io/config.hash":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.mirror":"a77247d80dfdd462b8863b85ab8ad4bb","kubernetes.io/config.seen":"2024-06-03T12:45:54.833437335Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0603 05:49:45.446579   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.446579   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.446579   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.446579   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.447424   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.447424   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Audit-Id: 1fd7a857-3f08-49c5-b7ae-959c201290fa
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.447424   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.447424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.447424   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.449766   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.450164   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.450468   10844 pod_ready.go:92] pod "etcd-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.450468   10844 pod_ready.go:81] duration metric: took 14.3544ms for pod "etcd-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.450468   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.450468   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-316400
	I0603 05:49:45.450468   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.450468   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.450468   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.451786   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.451786   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.451786   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.451786   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Audit-Id: 7db2269e-f6a7-4bc5-8297-a1b2a6ef4016
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.453889   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.454244   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-316400","namespace":"kube-system","uid":"1c07a75f-fb00-4529-a699-378974ce494b","resourceVersion":"1830","creationTimestamp":"2024-06-03T12:46:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.95.88:8443","kubernetes.io/config.hash":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.mirror":"29e4294fa112526de08d5737962f6330","kubernetes.io/config.seen":"2024-06-03T12:45:54.794125775Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:46:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0603 05:49:45.454804   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.454880   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.454880   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.454880   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.459322   10844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 05:49:45.459322   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.459322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.459322   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Audit-Id: dfd9a18c-1737-40d1-a6da-d3d242d6ae0d
	I0603 05:49:45.459322   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.459976   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.459976   10844 pod_ready.go:92] pod "kube-apiserver-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.459976   10844 pod_ready.go:81] duration metric: took 9.5085ms for pod "kube-apiserver-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.459976   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.460506   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-316400
	I0603 05:49:45.460506   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.460506   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.460506   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.462665   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:45.462665   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Audit-Id: fc26c92e-023f-4ba6-91de-cd7534a68bcc
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.462665   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.462665   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.462665   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.462665   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-316400","namespace":"kube-system","uid":"e821ebb1-cbc3-4ac5-8840-e066992422b0","resourceVersion":"1827","creationTimestamp":"2024-06-03T12:23:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.mirror":"53c1415900cfae2b2544e26360f8c9e2","kubernetes.io/config.seen":"2024-06-03T12:23:04.224060021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0603 05:49:45.464900   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:45.464900   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.464900   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.464900   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.467621   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:45.467621   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.467621   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.467878   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Audit-Id: 36a90c04-c89f-4867-bf8c-431f216e2fcb
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.467878   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.467878   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:45.468521   10844 pod_ready.go:92] pod "kube-controller-manager-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:45.468521   10844 pod_ready.go:81] duration metric: took 8.5451ms for pod "kube-controller-manager-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.468521   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:45.614161   10844 request.go:629] Waited for 145.3885ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:49:45.614259   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dl97g
	I0603 05:49:45.614259   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.614259   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.614338   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.615028   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.620267   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.620267   10844 round_trippers.go:580]     Audit-Id: 04e3436e-5a68-4df6-b2b2-571a7f7b2132
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.620342   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.620342   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.620342   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.620342   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dl97g","generateName":"kube-proxy-","namespace":"kube-system","uid":"78665ab7-c6dd-4381-8b29-75df4d31eff1","resourceVersion":"1713","creationTimestamp":"2024-06-03T12:30:58Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:30:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0603 05:49:45.816763   10844 request.go:629] Waited for 195.4242ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:49:45.817044   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m03
	I0603 05:49:45.817044   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:45.817044   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:45.817044   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:45.820793   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:45.820920   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:45.820920   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:45 GMT
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Audit-Id: 11958586-c4d0-48ae-b9b9-84d6750e3875
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:45.820920   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:45.820920   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:45.821110   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m03","uid":"39dbcb4e-fdeb-4463-8bde-9cfa6cead308","resourceVersion":"1870","creationTimestamp":"2024-06-03T12:41:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_41_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:41:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0603 05:49:45.821640   10844 pod_ready.go:97] node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:49:45.821640   10844 pod_ready.go:81] duration metric: took 353.1169ms for pod "kube-proxy-dl97g" in "kube-system" namespace to be "Ready" ...
	E0603 05:49:45.821640   10844 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-316400-m03" hosting pod "kube-proxy-dl97g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-316400-m03" has status "Ready":"Unknown"
	I0603 05:49:45.821702   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.032115   10844 request.go:629] Waited for 210.3253ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:49:46.032300   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ks64x
	I0603 05:49:46.032300   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.032300   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.032300   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.042131   10844 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 05:49:46.042131   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.042131   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.042131   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.042131   10844 round_trippers.go:580]     Audit-Id: 62566718-8d4b-4699-a4cc-7886732694dd
	I0603 05:49:46.042781   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ks64x","generateName":"kube-proxy-","namespace":"kube-system","uid":"60c8f253-7e07-4f56-b1f2-e0032ac6a8ce","resourceVersion":"1752","creationTimestamp":"2024-06-03T12:23:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0603 05:49:46.221415   10844 request.go:629] Waited for 177.7458ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:46.221591   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:46.221633   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.221662   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.221662   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.240234   10844 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0603 05:49:46.240234   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Audit-Id: 1396cfb1-a9a5-43a5-975b-490df236ae25
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.240234   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.240234   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.240234   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.240234   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:46.241041   10844 pod_ready.go:92] pod "kube-proxy-ks64x" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:46.241041   10844 pod_ready.go:81] duration metric: took 419.3381ms for pod "kube-proxy-ks64x" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.241041   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.416386   10844 request.go:629] Waited for 175.2773ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:49:46.416668   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z26hc
	I0603 05:49:46.416668   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.416668   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.416668   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.417465   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:46.417465   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Audit-Id: c90f8487-609e-453d-8ef5-8fa13630e6f3
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.417465   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.417465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.417465   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.421002   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.421235   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z26hc","generateName":"kube-proxy-","namespace":"kube-system","uid":"983da576-c697-4bdd-8908-93ec5b571787","resourceVersion":"2109","creationTimestamp":"2024-06-03T12:26:17Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:26:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49e2013-d0c6-4358-b5e0-0b51a21b9cf4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5827 chars]
	I0603 05:49:46.617230   10844 request.go:629] Waited for 195.6847ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:46.617428   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400-m02
	I0603 05:49:46.617428   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.617515   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.617515   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.620015   10844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 05:49:46.620015   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Audit-Id: 204ab35f-7933-4708-b919-db41266f7ff0
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.621915   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.621915   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.621915   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.622196   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400-m02","uid":"7e6a03a9-b766-478c-8a60-89762baf32b3","resourceVersion":"2120","creationTimestamp":"2024-06-03T12:49:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T05_49_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:49:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3930 chars]
	I0603 05:49:46.622196   10844 pod_ready.go:92] pod "kube-proxy-z26hc" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:46.622196   10844 pod_ready.go:81] duration metric: took 381.1527ms for pod "kube-proxy-z26hc" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.622781   10844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:46.822893   10844 request.go:629] Waited for 199.9125ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:49:46.823241   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-316400
	I0603 05:49:46.823241   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:46.823241   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:46.823241   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:46.827121   10844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 05:49:46.827121   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Audit-Id: ce2c5bb2-51ad-4b20-98e6-24f26c42614f
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:46.827121   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:46.827121   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:46.827121   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:46 GMT
	I0603 05:49:46.827490   10844 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-316400","namespace":"kube-system","uid":"b60616c7-ff08-4274-9dd9-601b5e4201bb","resourceVersion":"1854","creationTimestamp":"2024-06-03T12:23:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.mirror":"392dbbcc275890dd2b6fadbfc5aaee27","kubernetes.io/config.seen":"2024-06-03T12:22:56.267037488Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0603 05:49:47.032112   10844 request.go:629] Waited for 203.7651ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:47.032226   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes/multinode-316400
	I0603 05:49:47.032226   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:47.032226   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:47.032226   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:47.032656   10844 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 05:49:47.036414   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:47.036414   10844 round_trippers.go:580]     Audit-Id: e724a5ed-d3e2-446f-a725-b40e1e16f1b8
	I0603 05:49:47.036414   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:47.036475   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:47.036475   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:47.036475   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:47.036475   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:47 GMT
	I0603 05:49:47.037014   10844 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T12:23:00Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0603 05:49:47.037223   10844 pod_ready.go:92] pod "kube-scheduler-multinode-316400" in "kube-system" namespace has status "Ready":"True"
	I0603 05:49:47.037223   10844 pod_ready.go:81] duration metric: took 414.4404ms for pod "kube-scheduler-multinode-316400" in "kube-system" namespace to be "Ready" ...
	I0603 05:49:47.037223   10844 pod_ready.go:38] duration metric: took 1.6209055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 05:49:47.037223   10844 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 05:49:47.048053   10844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:49:47.076257   10844 system_svc.go:56] duration metric: took 39.0339ms WaitForService to wait for kubelet
	I0603 05:49:47.076387   10844 kubeadm.go:576] duration metric: took 4.9175675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 05:49:47.076417   10844 node_conditions.go:102] verifying NodePressure condition ...
	I0603 05:49:47.219101   10844 request.go:629] Waited for 142.474ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.88:8443/api/v1/nodes
	I0603 05:49:47.219280   10844 round_trippers.go:463] GET https://172.17.95.88:8443/api/v1/nodes
	I0603 05:49:47.219280   10844 round_trippers.go:469] Request Headers:
	I0603 05:49:47.219280   10844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 05:49:47.219280   10844 round_trippers.go:473]     Accept: application/json, */*
	I0603 05:49:47.221237   10844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 05:49:47.221237   10844 round_trippers.go:577] Response Headers:
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Content-Type: application/json
	I0603 05:49:47.224899   10844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bd719fb4-bf5f-4d27-85ab-61f44d2bc7b5
	I0603 05:49:47.224899   10844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 26cda572-8670-411d-81d1-3a6dda50571a
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:49:47 GMT
	I0603 05:49:47.224899   10844 round_trippers.go:580]     Audit-Id: cb212733-8a10-4a2a-a0e9-e149c1518781
	I0603 05:49:47.225867   10844 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2126"},"items":[{"metadata":{"name":"multinode-316400","uid":"48665121-db99-4b5b-ba6e-d701ddd58b24","resourceVersion":"1893","creationTimestamp":"2024-06-03T12:23:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-316400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"599070631c2216ebc936292d491e4fe10e15b9d8","minikube.k8s.io/name":"multinode-316400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T05_23_05_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15603 chars]
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 05:49:47.226390   10844 node_conditions.go:123] node cpu capacity is 2
	I0603 05:49:47.226390   10844 node_conditions.go:105] duration metric: took 149.9718ms to run NodePressure ...
	I0603 05:49:47.226390   10844 start.go:240] waiting for startup goroutines ...
	I0603 05:49:47.227911   10844 start.go:254] writing updated cluster config ...
	I0603 05:49:47.232345   10844 out.go:177] 
	I0603 05:49:47.235167   10844 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:47.243999   10844 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:49:47.243999   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:49:47.250527   10844 out.go:177] * Starting "multinode-316400-m03" worker node in "multinode-316400" cluster
	I0603 05:49:47.250914   10844 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 05:49:47.250914   10844 cache.go:56] Caching tarball of preloaded images
	I0603 05:49:47.253175   10844 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 05:49:47.253175   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 05:49:47.253175   10844 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-316400\config.json ...
	I0603 05:49:47.258993   10844 start.go:360] acquireMachinesLock for multinode-316400-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 05:49:47.258993   10844 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-316400-m03"
	I0603 05:49:47.258993   10844 start.go:96] Skipping create...Using existing machine configuration
	I0603 05:49:47.258993   10844 fix.go:54] fixHost starting: m03
	I0603 05:49:47.259555   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:49:49.323790   10844 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:49:49.334695   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:49.334695   10844 fix.go:112] recreateIfNeeded on multinode-316400-m03: state=Stopped err=<nil>
	W0603 05:49:49.334695   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 05:49:49.338755   10844 out.go:177] * Restarting existing hyperv VM for "multinode-316400-m03" ...
	I0603 05:49:49.341412   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-316400-m03
	I0603 05:49:52.380595   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:49:52.385198   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:52.385198   10844 main.go:141] libmachine: Waiting for host to start...
	I0603 05:49:52.385271   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:49:54.655816   10844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:49:54.666064   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:54.666064   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 05:49:57.202789   10844 main.go:141] libmachine: [stdout =====>] : 
	I0603 05:49:57.202789   10844 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:49:58.219033   10844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
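
Note: the pod_ready.go / round_trippers.go trace above is a readiness poll loop against the apiserver, and the recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (rest.Config QPS/Burst), not from API Priority and Fairness on the server. A minimal client-go sketch of the same pattern follows — this is not minikube's actual implementation; the pod name and namespace are copied from the trace and the QPS/Burst values are illustrative:

// Sketch: poll a pod until its Ready condition is True, the pattern the
// pod_ready.go trace above records. QPS/Burst control the client-side
// throttling that produces the "Waited for ..." log lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 5    // illustrative; raise these to shorten throttling waits
	cfg.Burst = 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-ks64x", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}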
	
	
	==> Docker <==
	Jun 03 12:47:09 multinode-316400 dockerd[1048]: 2024/06/03 12:47:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:12 multinode-316400 dockerd[1048]: 2024/06/03 12:47:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:13 multinode-316400 dockerd[1048]: 2024/06/03 12:47:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:16 multinode-316400 dockerd[1048]: 2024/06/03 12:47:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:17 multinode-316400 dockerd[1048]: 2024/06/03 12:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:17 multinode-316400 dockerd[1048]: 2024/06/03 12:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:47:17 multinode-316400 dockerd[1048]: 2024/06/03 12:47:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
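
Note: the repeated dockerd messages above are Go's standard net/http warning, emitted whenever a handler calls WriteHeader more than once on the same response — here triggered inside the otelhttp instrumentation wrapper. The extra calls are ignored, so the lines are noisy but harmless. A minimal sketch that reproduces the same warning (hypothetical handler, not Docker's code):

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		// The second call is ignored, and net/http logs:
		// "http: superfluous response.WriteHeader call from ..."
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}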
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c57e529e14789       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   6bf8343e76a7e       busybox-fc5497c4f-pm79t
	4241e2ff2dfe8       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   f91f85c4c9180       coredns-7db6d8ff4d-4hrc6
	e1365acc9d8f5       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   776fb3e0c2be1       storage-provisioner
	3a08a76e2a79b       ac1c61439df46                                                                                         4 minutes ago       Running             kindnet-cni               1                   3fb9a5291cc42       kindnet-4hpsl
	eeba3616d7005       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   776fb3e0c2be1       storage-provisioner
	09616a16042d3       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                1                   5e8f89dffdc8e       kube-proxy-ks64x
	a9b10f4d479ac       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   61b2e6f87def8       kube-apiserver-multinode-316400
	ef3c014848675       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   942fe3bc13ce6       etcd-multinode-316400
	334bb0174b55e       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            1                   5938c827a45b5       kube-scheduler-multinode-316400
	cbaa09a85a643       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   1                   31bce861be7b7       kube-controller-manager-multinode-316400
	ec31816ada18f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   87702037798e9       busybox-fc5497c4f-pm79t
	8280b39046781       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   d4b4a69fc5b72       coredns-7db6d8ff4d-4hrc6
	a00a9dc2a937f       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              27 minutes ago      Exited              kindnet-cni               0                   53f366fa802e0       kindnet-4hpsl
	ad08c7b8f3aff       747097150317f                                                                                         27 minutes ago      Exited              kube-proxy                0                   0ab8fbb688dfe       kube-proxy-ks64x
	f39be6db7a1f8       a52dc94f0a912                                                                                         27 minutes ago      Exited              kube-scheduler            0                   a24225992b633       kube-scheduler-multinode-316400
	3d7dc29a57912       25a1387cdab82                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   bf22fe6661544       kube-controller-manager-multinode-316400
	
	
	==> coredns [4241e2ff2dfe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = d518b2f22d7013b4ce33ee954d9f8802810eac8bed02a1cf0be20d76208a6f83258316421f15df605ab13f1704501370ffcd7655fbac5818a200880248c94b94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56422 - 9876 "HINFO IN 206560838863428655.1450761119047549818. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.131379968s
	
	
	==> coredns [8280b3904678] <==
	[INFO] 10.244.0.3:42814 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000783s
	[INFO] 10.244.0.3:56125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193798s
	[INFO] 10.244.0.3:33604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000681s
	[INFO] 10.244.0.3:43179 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152098s
	[INFO] 10.244.0.3:37734 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183099s
	[INFO] 10.244.0.3:40712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065399s
	[INFO] 10.244.0.3:57849 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143498s
	[INFO] 10.244.1.2:55369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220898s
	[INFO] 10.244.1.2:47639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156398s
	[INFO] 10.244.1.2:60680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117399s
	[INFO] 10.244.1.2:44347 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.001372486s
	[INFO] 10.244.0.3:47771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111999s
	[INFO] 10.244.0.3:36325 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.0.3:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137599s
	[INFO] 10.244.0.3:48065 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144598s
	[INFO] 10.244.1.2:51116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198198s
	[INFO] 10.244.1.2:48621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000370096s
	[INFO] 10.244.1.2:43942 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109399s
	[INFO] 10.244.1.2:37489 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084899s
	[INFO] 10.244.0.3:57190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217998s
	[INFO] 10.244.0.3:50064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174399s
	[INFO] 10.244.0.3:60160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000595s
	[INFO] 10.244.0.3:35078 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000136799s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
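
Note: the NXDOMAIN/NOERROR sequence in the coredns log above (kubernetes.default → kubernetes.default.default.svc.cluster.local → kubernetes.default.svc.cluster.local) is the resolver walking the pod's resolv.conf search domains before the name finally resolves. A minimal in-cluster sketch (a hypothetical program, assumed to run inside a pod) that exercises both the bare name and the absolute form:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// The bare name is expanded through resolv.conf search domains;
	// the trailing dot makes the second query absolute and skips that.
	for _, host := range []string{"kubernetes.default", "kubernetes.default.svc.cluster.local."} {
		addrs, err := net.DefaultResolver.LookupHost(ctx, host)
		fmt.Println(host, addrs, err)
	}
}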
	
	
	==> describe nodes <==
	Name:               multinode-316400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-316400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-316400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T05_23_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:23:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-316400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:50:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:46:41 +0000   Mon, 03 Jun 2024 12:46:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.95.88
	  Hostname:    multinode-316400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 babca97119de4d6fa999cc452dbf962d
	  System UUID:                2c702ef9-a339-1f48-92d3-793ba74e8cf0
	  Boot ID:                    081e28f7-22a7-44c3-8f7f-5efab2cb6c1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pm79t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-4hrc6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-316400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-4hpsl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-316400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-multinode-316400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-ks64x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-316400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-316400 status is now: NodeReady
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m37s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m37s)  kubelet          Node multinode-316400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m37s)  kubelet          Node multinode-316400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m18s                  node-controller  Node multinode-316400 event: Registered Node multinode-316400 in Controller
	
	
	Name:               multinode-316400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-316400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-316400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T05_49_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:49:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-316400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:50:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:49:45 +0000   Mon, 03 Jun 2024 12:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:49:45 +0000   Mon, 03 Jun 2024 12:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:49:45 +0000   Mon, 03 Jun 2024 12:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:49:45 +0000   Mon, 03 Jun 2024 12:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.91.9
	  Hostname:    multinode-316400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9ce1ab3ce424831add544031d86ef5a
	  System UUID:                ec79485d-21c4-6145-8e57-c09e4fdf577c
	  Boot ID:                    e94edef0-648a-4e94-b85f-e40351ff1f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rxphf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-789v5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-z26hc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node multinode-316400-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  51s (x2 over 51s)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x2 over 51s)  kubelet          Node multinode-316400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x2 over 51s)  kubelet          Node multinode-316400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           48s                node-controller  Node multinode-316400-m02 event: Registered Node multinode-316400-m02 in Controller
	  Normal  NodeReady                46s                kubelet          Node multinode-316400-m02 status is now: NodeReady
	
	
	Name:               multinode-316400-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-316400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=599070631c2216ebc936292d491e4fe10e15b9d8
	                    minikube.k8s.io/name=multinode-316400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T05_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:41:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-316400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:42:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 12:41:36 +0000   Mon, 03 Jun 2024 12:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.87.60
	  Hostname:    multinode-316400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc656517670545aaaa7c7a25b2f64753
	  System UUID:                a308abc0-c931-7443-ad98-10f05edbe0d1
	  Boot ID:                    e0354f7a-df63-4468-a6a7-c994e7630072
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2g66r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-dl97g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m1s                 kube-proxy       
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)    kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)    kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                  kubelet          Node multinode-316400-m03 status is now: NodeReady
	  Normal  Starting                 9m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m4s (x2 over 9m4s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m4s (x2 over 9m4s)  kubelet          Node multinode-316400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m4s (x2 over 9m4s)  kubelet          Node multinode-316400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m3s                 node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
	  Normal  NodeReady                8m55s                kubelet          Node multinode-316400-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m18s                node-controller  Node multinode-316400-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m18s                node-controller  Node multinode-316400-m03 event: Registered Node multinode-316400-m03 in Controller
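The Unknown conditions and unreachable taints above mean the node controller lost contact with the kubelet on multinode-316400-m03 and fenced the node. A minimal way to confirm this from the host, reusing the multinode-316400 context from these logs (a sketch, not part of the test run):

    kubectl --context multinode-316400 get node multinode-316400-m03 -o jsonpath="{.spec.taints}"
    kubectl --context multinode-316400 get node multinode-316400-m03 -o jsonpath="{.status.conditions[?(@.type=='Ready')].status}"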
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.534473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.760285] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.738299] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.337396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:45] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.170051] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +27.020755] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.098354] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.547871] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[  +0.203768] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	[  +0.236273] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	[  +2.922970] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.212840] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.211978] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.272281] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +0.897361] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.100992] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.175568] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +1.304893] kauditd_printk_skb: 44 callbacks suppressed
	[Jun 3 12:46] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.658985] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +7.567408] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [ef3c01484867] <==
	{"level":"info","ts":"2024-06-03T12:45:57.047865Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T12:45:57.047886Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T12:45:57.048259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 switched to configuration voters=(2461051450677544552)"}
	{"level":"info","ts":"2024-06-03T12:45:57.048351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","added-peer-id":"2227694153984668","added-peer-peer-urls":["https://172.17.87.47:2380"]}
	{"level":"info","ts":"2024-06-03T12:45:57.048469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59e9e3bd07d1204a","local-member-id":"2227694153984668","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:45:57.048554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:45:57.062256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T12:45:57.062576Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2227694153984668","initial-advertise-peer-urls":["https://172.17.95.88:2380"],"listen-peer-urls":["https://172.17.95.88:2380"],"advertise-client-urls":["https://172.17.95.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.95.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T12:45:57.062655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:45:57.062696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.95.88:2380"}
	{"level":"info","ts":"2024-06-03T12:45:57.062709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.95.88:2380"}
	{"level":"info","ts":"2024-06-03T12:45:58.793198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T12:45:58.793257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:45:58.79336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgPreVoteResp from 2227694153984668 at term 2"}
	{"level":"info","ts":"2024-06-03T12:45:58.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T12:45:58.79343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 received MsgVoteResp from 2227694153984668 at term 3"}
	{"level":"info","ts":"2024-06-03T12:45:58.793456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2227694153984668 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T12:45:58.793469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2227694153984668 elected leader 2227694153984668 at term 3"}
	{"level":"info","ts":"2024-06-03T12:45:58.803759Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2227694153984668","local-member-attributes":"{Name:multinode-316400 ClientURLs:[https://172.17.95.88:2379]}","request-path":"/0/members/2227694153984668/attributes","cluster-id":"59e9e3bd07d1204a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:45:58.803778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:45:58.804055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:45:58.805057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:45:58.805235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:45:58.807124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.95.88:2379"}
	
	
	==> kernel <==
	 12:50:32 up 6 min,  0 users,  load average: 1.76, 0.72, 0.30
	Linux multinode-316400 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3a08a76e2a79] <==
	I0603 12:49:43.751156       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:49:53.763302       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 12:49:53.763528       1 main.go:227] handling current node
	I0603 12:49:53.763544       1 main.go:223] Handling node with IPs: map[172.17.91.9:{}]
	I0603 12:49:53.763551       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:49:53.764434       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:49:53.764534       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:50:03.775164       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 12:50:03.775277       1 main.go:227] handling current node
	I0603 12:50:03.775293       1 main.go:223] Handling node with IPs: map[172.17.91.9:{}]
	I0603 12:50:03.775300       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:50:03.775793       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:50:03.775874       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:50:13.791812       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 12:50:13.791901       1 main.go:227] handling current node
	I0603 12:50:13.791917       1 main.go:223] Handling node with IPs: map[172.17.91.9:{}]
	I0603 12:50:13.791925       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:50:13.792049       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:50:13.792082       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:50:23.799684       1 main.go:223] Handling node with IPs: map[172.17.95.88:{}]
	I0603 12:50:23.799900       1 main.go:227] handling current node
	I0603 12:50:23.799934       1 main.go:223] Handling node with IPs: map[172.17.91.9:{}]
	I0603 12:50:23.799977       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:50:23.800182       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:50:23.800412       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
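kindnet re-walks the node list roughly every ten seconds and keeps a route to each remote node's PodCIDR via that node's InternalIP. Given the CIDRs and IPs logged above, the routes it should have programmed on the primary node can be spot-checked with (illustrative; interface names may differ):

    minikube ssh -p multinode-316400 -- ip route
    # expected entries (illustrative):
    #   10.244.1.0/24 via 172.17.91.9 dev eth0
    #   10.244.3.0/24 via 172.17.87.60 dev eth0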
	
	
	==> kindnet [a00a9dc2a937] <==
	I0603 12:42:39.493080       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:42:49.510208       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:42:49.510320       1 main.go:227] handling current node
	I0603 12:42:49.510337       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:42:49.510345       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:42:49.510502       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:42:49.510850       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:42:59.524960       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:42:59.525065       1 main.go:227] handling current node
	I0603 12:42:59.525082       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:42:59.525090       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:42:59.525213       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:42:59.525244       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:43:09.540131       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:43:09.540253       1 main.go:227] handling current node
	I0603 12:43:09.540269       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:43:09.540277       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:43:09.540823       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:43:09.540933       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	I0603 12:43:19.547744       1 main.go:223] Handling node with IPs: map[172.17.87.47:{}]
	I0603 12:43:19.547868       1 main.go:227] handling current node
	I0603 12:43:19.547881       1 main.go:223] Handling node with IPs: map[172.17.94.201:{}]
	I0603 12:43:19.547887       1 main.go:250] Node multinode-316400-m02 has CIDR [10.244.1.0/24] 
	I0603 12:43:19.548098       1 main.go:223] Handling node with IPs: map[172.17.87.60:{}]
	I0603 12:43:19.548109       1 main.go:250] Node multinode-316400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a9b10f4d479a] <==
	I0603 12:46:00.455613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 12:46:00.469239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 12:46:00.471358       1 aggregator.go:165] initial CRD sync complete...
	I0603 12:46:00.471790       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 12:46:00.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 12:46:00.472206       1 cache.go:39] Caches are synced for autoregister controller
	I0603 12:46:00.495677       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 12:46:00.495925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 12:46:00.495948       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 12:46:00.496039       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 12:46:00.496071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 12:46:00.506247       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 12:46:00.508040       1 policy_source.go:224] refreshing policies
	I0603 12:46:00.509489       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 12:46:00.517149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 12:46:01.342295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 12:46:01.980289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.87.47 172.17.95.88]
	I0603 12:46:01.985303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 12:46:02.001181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 12:46:03.152173       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 12:46:03.367764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 12:46:03.420648       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 12:46:03.586830       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 12:46:03.597792       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0603 12:46:21.953303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.95.88]
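The two "Resetting endpoints for master service" lines are the apiserver's endpoint reconciler first re-adding the pre-restart address 172.17.87.47 and then settling on 172.17.95.88 once it is the only instance serving. The resulting object can be inspected with (a sketch using the same context):

    kubectl --context multinode-316400 get endpoints kubernetes -n default -o wide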
	
	
	==> kube-controller-manager [3d7dc29a5791] <==
	I0603 12:26:17.962940       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 12:26:17.992381       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 12:26:18.134186       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m02"
	I0603 12:26:36.973341       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:27:03.162045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.481081ms"
	I0603 12:27:03.200275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.173688ms"
	I0603 12:27:03.200832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128µs"
	I0603 12:27:03.212471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.1µs"
	I0603 12:27:03.240136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.3µs"
	I0603 12:27:06.015302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.091372ms"
	I0603 12:27:06.015849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.898µs"
	I0603 12:27:06.270719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.850823ms"
	I0603 12:27:06.272105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0603 12:30:58.224321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:30:58.226994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 12:30:58.246674       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.2.0/24"]
	I0603 12:31:03.218074       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-316400-m03"
	I0603 12:31:17.451951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:38:48.355018       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:41:21.867121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:41:27.622412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m03\" does not exist"
	I0603 12:41:27.622570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:41:27.656130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m03" podCIDRs=["10.244.3.0/24"]
	I0603 12:41:36.163530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:43:13.716339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
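Note that the node-ipam-controller assigned multinode-316400-m03 the CIDR 10.244.2.0/24 at 12:30:58 but 10.244.3.0/24 at 12:41:27, because the node object was deleted and recreated in between. The live per-node allocations can be listed with (a sketch):

    kubectl --context multinode-316400 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR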
	
	
	==> kube-controller-manager [cbaa09a85a64] <==
	I0603 12:46:13.536134       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 12:46:41.320053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:46:53.164917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.36569ms"
	I0603 12:46:53.165094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.2µs"
	I0603 12:47:06.773655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.701µs"
	I0603 12:47:06.840796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.603045ms"
	I0603 12:47:06.914342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.101µs"
	I0603 12:47:06.955417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.538311ms"
	I0603 12:47:06.955873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.8µs"
	I0603 12:49:26.001061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.911639ms"
	I0603 12:49:26.017107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.969398ms"
	I0603 12:49:26.036235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.067317ms"
	I0603 12:49:26.036429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0603 12:49:40.684028       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-316400-m02\" does not exist"
	I0603 12:49:40.711616       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-316400-m02" podCIDRs=["10.244.1.0/24"]
	I0603 12:49:42.608915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.2µs"
	I0603 12:49:45.327650       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-316400-m02"
	I0603 12:49:45.363608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.201µs"
	I0603 12:49:56.694974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.1µs"
	I0603 12:49:56.709267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.7µs"
	I0603 12:49:56.730754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.4µs"
	I0603 12:49:56.821251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="203.202µs"
	I0603 12:49:56.827020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.101µs"
	I0603 12:49:57.870847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.304408ms"
	I0603 12:49:57.872074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.301µs"
	
	
	==> kube-proxy [09616a16042d] <==
	I0603 12:46:02.911627       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:46:02.969369       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.95.88"]
	I0603 12:46:03.097595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:46:03.097638       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:46:03.097656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:46:03.100839       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:46:03.102842       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:46:03.104091       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:46:03.107664       1 config.go:192] "Starting service config controller"
	I0603 12:46:03.108761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:46:03.109017       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:46:03.109106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:46:03.117240       1 config.go:319] "Starting node config controller"
	I0603 12:46:03.119259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:46:03.209595       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:46:03.209810       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:46:03.219914       1 shared_informer.go:320] Caches are synced for node config
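The route_localnet line means kube-proxy enabled NodePort access on loopback by flipping a kernel sysctl, exactly as the log's own hint describes. Verifying the setting inside the VM (a sketch; assumes minikube ssh access to the profile):

    minikube ssh -p multinode-316400 -- sysctl net.ipv4.conf.all.route_localnet
    # expected: net.ipv4.conf.all.route_localnet = 1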
	
	
	==> kube-proxy [ad08c7b8f3af] <==
	I0603 12:23:20.546493       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:23:20.568576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.87.47"]
	I0603 12:23:20.659257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:23:20.659393       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:23:20.659415       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:23:20.663456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:23:20.664643       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:23:20.664662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:23:20.666528       1 config.go:192] "Starting service config controller"
	I0603 12:23:20.666581       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:23:20.666609       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:23:20.666615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:23:20.667612       1 config.go:319] "Starting node config controller"
	I0603 12:23:20.667941       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:23:20.767105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:23:20.767300       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:23:20.768158       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [334bb0174b55] <==
	I0603 12:45:58.086336       1 serving.go:380] Generated self-signed cert in-memory
	W0603 12:46:00.380399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 12:46:00.380684       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 12:46:00.380884       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 12:46:00.381107       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 12:46:00.453904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 12:46:00.453991       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:46:00.464075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 12:46:00.464177       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 12:46:00.464196       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 12:46:00.464265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 12:46:00.568947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
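The startup warnings above come from the scheduler probing the extension-apiserver-authentication ConfigMap before its RBAC caches have synced; the log itself prints the rolebinding that would grant access. Whether the permission is actually in place can be checked with an impersonated query (a sketch):

    kubectl --context multinode-316400 auth can-i get configmaps --namespace kube-system --as system:kube-scheduler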
	
	
	==> kube-scheduler [f39be6db7a1f] <==
	E0603 12:23:01.873977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:23:01.875277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:23:01.875315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:23:01.916341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:01.916447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:01.921821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:23:01.921933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:23:01.948084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:01.948298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.015926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:02.016396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.068872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:23:02.069079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:23:02.185191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:23:02.185330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:23:02.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:23:02.305617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:23:02.376410       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:23:02.377064       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 12:23:02.451005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:23:02.451429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:23:02.561713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:23:02.561749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 12:23:04.563581       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 12:43:27.858508       1 run.go:74] "command failed" err="finished without leader elect"
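The final "finished without leader elect" error is the pre-restart scheduler losing its leader-election lease when the node was stopped, after which the replacement instance above took over. The current lease holder can be inspected with (a sketch; kube-scheduler leader election uses a Lease object in kube-system):

    kubectl --context multinode-316400 -n kube-system get lease kube-scheduler -o jsonpath="{.spec.holderIdentity}"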
	
	
	==> kubelet <==
	Jun 03 12:46:44 multinode-316400 kubelet[1519]: I0603 12:46:44.933444    1519 scope.go:117] "RemoveContainer" containerID="eeba3616d700535427e0ceb1938da282c280a8880c3115e99f2833de00a11ffc"
	Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.925925    1519 scope.go:117] "RemoveContainer" containerID="8c884e5bfb9610572eb767230d7b640de4fcb6546fc3b8695e8656d6eb0ea163"
	Jun 03 12:46:54 multinode-316400 kubelet[1519]: E0603 12:46:54.975420    1519 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:46:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:46:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:46:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:46:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:46:54 multinode-316400 kubelet[1519]: I0603 12:46:54.978150    1519 scope.go:117] "RemoveContainer" containerID="29c39ff8468f2c769565bdfbccd358cbcd64984d79001fc53a07e38b87bf6345"
	Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.682232    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8343e76a7efe90b07cd80686a37a1009d84cebe1e8c037ddff6ab573da4b5"
	Jun 03 12:47:05 multinode-316400 kubelet[1519]: I0603 12:47:05.704345    1519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f85c4c9180652f1a9bcc24b14bfb687b59e4ca165b54c2eadb72b56b67aa9"
	Jun 03 12:47:54 multinode-316400 kubelet[1519]: E0603 12:47:54.968065    1519 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:47:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:47:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:47:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:47:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:48:54 multinode-316400 kubelet[1519]: E0603 12:48:54.967261    1519 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:48:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:48:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:48:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:48:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:49:54 multinode-316400 kubelet[1519]: E0603 12:49:54.968892    1519 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:49:54 multinode-316400 kubelet[1519]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:49:54 multinode-316400 kubelet[1519]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:49:54 multinode-316400 kubelet[1519]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:49:54 multinode-316400 kubelet[1519]:  > table="nat" chain="KUBE-KUBELET-CANARY"
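The recurring canary error means the guest kernel exposes no ip6tables nat table, so the kubelet cannot install its IPv6 health-canary chain; this is benign for an IPv4-only run (kube-proxy logged "No iptables support for family IPv6" above). Whether the module can be loaded at all can be tested inside the VM (a sketch; availability depends on the minikube ISO kernel build):

    minikube ssh -p multinode-316400 -- sudo modprobe ip6table_nat
    minikube ssh -p multinode-316400 -- sudo ip6tables -t nat -L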
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 05:50:21.160315   10904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
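The stderr warning repeated throughout this report is minikube's Docker CLI probe failing because the CLI's current context points at metadata that no longer exists under .docker\contexts\meta. It does not affect the Hyper-V tests; switching the CLI back to the built-in default context should silence it (a sketch; the right remediation depends on how the context metadata was removed):

    docker context use default
    # or, if an environment variable is forcing a stale context:
    #   set DOCKER_CONTEXT=              (cmd)
    #   Remove-Item Env:DOCKER_CONTEXT   (PowerShell)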
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-316400 -n multinode-316400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-316400 -n multinode-316400: (11.9237491s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-316400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (520.29s)

                                                
                                    
TestKubernetesUpgrade (1470.15s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
E0603 06:12:10.861409    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (6m11.2813797s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-776200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-776200: (34.7327289s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-776200 status --format={{.Host}}
E0603 06:18:39.526018    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-776200 status --format={{.Host}}: exit status 7 (2.4719703s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:18:38.273889   12200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (6m50.1893144s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-776200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (245.5446ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-776200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:25:31.163846    5408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-776200
	    minikube start -p kubernetes-upgrade-776200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7762002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-776200 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0603 06:27:10.862235    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m31.9708391s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-776200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-776200" primary control-plane node in "kubernetes-upgrade-776200" cluster
	* Updating the running hyperv "kubernetes-upgrade-776200" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:25:31.403100    1152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 06:25:31.405096    1152 out.go:291] Setting OutFile to fd 1680 ...
	I0603 06:25:31.405096    1152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 06:25:31.405096    1152 out.go:304] Setting ErrFile to fd 1760...
	I0603 06:25:31.405096    1152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 06:25:31.439621    1152 out.go:298] Setting JSON to false
	I0603 06:25:31.444208    1152 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10359,"bootTime":1717410772,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 06:25:31.444208    1152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 06:25:31.448210    1152 out.go:177] * [kubernetes-upgrade-776200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 06:25:31.452076    1152 notify.go:220] Checking for updates...
	I0603 06:25:31.454969    1152 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 06:25:31.457410    1152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 06:25:31.460773    1152 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 06:25:31.463589    1152 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 06:25:31.466147    1152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 06:25:31.470346    1152 config.go:182] Loaded profile config "kubernetes-upgrade-776200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:25:31.472618    1152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 06:25:38.300720    1152 out.go:177] * Using the hyperv driver based on existing profile
	I0603 06:25:38.305002    1152 start.go:297] selected driver: hyperv
	I0603 06:25:38.305002    1152 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-776200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-776200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.90 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 06:25:38.305247    1152 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 06:25:38.356588    1152 cni.go:84] Creating CNI manager for ""
	I0603 06:25:38.356644    1152 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 06:25:38.356875    1152 start.go:340] cluster config:
	{Name:kubernetes-upgrade-776200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-776200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.90.90 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 06:25:38.357177    1152 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 06:25:38.361338    1152 out.go:177] * Starting "kubernetes-upgrade-776200" primary control-plane node in "kubernetes-upgrade-776200" cluster
	I0603 06:25:38.364772    1152 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 06:25:38.364772    1152 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 06:25:38.364772    1152 cache.go:56] Caching tarball of preloaded images
	I0603 06:25:38.364772    1152 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 06:25:38.365796    1152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 06:25:38.365796    1152 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-776200\config.json ...
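
The config.json re-saved here is plain JSON under the profile directory. As a minimal sketch only, with a hypothetical struct trimmed to a few of the fields visible in the config dump above (this is not minikube's actual type), persisting such a profile in Go looks like:

	package main

	import (
		"encoding/json"
		"os"
	)

	// profileConfig is a hypothetical, heavily trimmed stand-in for the
	// cluster config printed in the log above.
	type profileConfig struct {
		Name              string
		Driver            string
		KubernetesVersion string
		APIServerPort     int
	}

	func main() {
		cfg := profileConfig{
			Name:              "kubernetes-upgrade-776200",
			Driver:            "hyperv",
			KubernetesVersion: "v1.30.1",
			APIServerPort:     8443,
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// The real file lives under the profile directory shown in the log.
		if err := os.WriteFile("config.json", data, 0o644); err != nil {
			panic(err)
		}
	}
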
	I0603 06:25:38.369244    1152 start.go:360] acquireMachinesLock for kubernetes-upgrade-776200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 06:29:35.455118    1152 start.go:364] duration metric: took 3m57.0848899s to acquireMachinesLock for "kubernetes-upgrade-776200"
	I0603 06:29:35.455527    1152 start.go:96] Skipping create...Using existing machine configuration
	I0603 06:29:35.455679    1152 fix.go:54] fixHost starting: 
	I0603 06:29:35.456245    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:37.632661    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:37.632661    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:37.632661    1152 fix.go:112] recreateIfNeeded on kubernetes-upgrade-776200: state=Running err=<nil>
	W0603 06:29:37.632832    1152 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 06:29:37.636675    1152 out.go:177] * Updating the running hyperv "kubernetes-upgrade-776200" VM ...
	I0603 06:29:37.648964    1152 machine.go:94] provisionDockerMachine start ...
	I0603 06:29:37.650467    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:39.785019    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:39.785019    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:39.785019    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:29:42.413437    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:29:42.413437    1152 main.go:141] libmachine: [stderr =====>] : 
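
Each [executing ==>] / [stdout =====>] pair above is one PowerShell round-trip: the driver first queries the VM's state, then its first network adapter's first IP address. A minimal Go sketch of that pattern, reusing the exact cmdlet expressions from the log (the psQuery helper is illustrative, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// psQuery runs one PowerShell expression the way the libmachine
	// [executing ==>] lines above do.
	func psQuery(expr string) (string, error) {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", expr,
		)
		out, err := cmd.Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, _ := psQuery(`( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state`)
		ip, _ := psQuery(`(( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]`)
		fmt.Println(state, ip) // e.g. "Running 172.17.90.90", as in the log
	}
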
	I0603 06:29:42.424044    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:29:42.424882    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:29:42.424934    1152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 06:29:42.561988    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-776200
	
	I0603 06:29:42.561988    1152 buildroot.go:166] provisioning hostname "kubernetes-upgrade-776200"
	I0603 06:29:42.561988    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:44.798591    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:44.811695    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:44.811899    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:29:47.395908    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:29:47.395908    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:47.406534    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:29:47.407197    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:29:47.407197    1152 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-776200 && echo "kubernetes-upgrade-776200" | sudo tee /etc/hostname
	I0603 06:29:47.579347    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-776200
	
	I0603 06:29:47.579555    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:49.646572    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:49.646572    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:49.646738    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:29:52.284134    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:29:52.284665    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:52.291012    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:29:52.291675    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:29:52.291675    1152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-776200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-776200/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-776200' | sudo tee -a /etc/hosts; 
				fi
			fi
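
The shell fragment above is an idempotent /etc/hosts edit: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one, so repeated provisioning never duplicates entries. A rough Go equivalent of the same logic (illustrative only, not minikube code):

	package main

	import (
		"os"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Already mapped somewhere? Do nothing (the grep -xq '.*\s<name>' check).
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), entry) // the sed branch
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n" // the tee -a branch
		}
		return os.WriteFile(path, []byte(out), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-776200"); err != nil {
			panic(err)
		}
	}
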
	I0603 06:29:52.441301    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 06:29:52.441383    1152 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 06:29:52.441383    1152 buildroot.go:174] setting up certificates
	I0603 06:29:52.441461    1152 provision.go:84] configureAuth start
	I0603 06:29:52.441521    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:54.795264    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:54.795264    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:54.795264    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:29:57.392153    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:29:57.403675    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:57.403675    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:29:59.514829    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:29:59.514829    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:29:59.514829    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:02.084569    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:02.093263    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:02.093263    1152 provision.go:143] copyHostCerts
	I0603 06:30:02.093894    1152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0603 06:30:02.093894    1152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0603 06:30:02.094077    1152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0603 06:30:02.095882    1152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0603 06:30:02.095882    1152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0603 06:30:02.096188    1152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 06:30:02.097539    1152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0603 06:30:02.097539    1152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0603 06:30:02.097901    1152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0603 06:30:02.098995    1152 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-776200 san=[127.0.0.1 172.17.90.90 kubernetes-upgrade-776200 localhost minikube]
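
configureAuth regenerates a server certificate whose SANs cover every name the daemon may be reached by: loopback, the VM's Hyper-V IP, the machine name, localhost, and minikube. A standard-library sketch of issuing a CA-signed server cert with those SANs (assumed shape, error handling elided; not minikube's actual generator):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In-memory stand-ins for ca-key.pem / ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // the CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with exactly the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-776200"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.90.90")},
			DNSNames:     []string{"kubernetes-upgrade-776200", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		f, _ := os.Create("server.pem")
		defer f.Close()
		pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
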
	I0603 06:30:02.217569    1152 provision.go:177] copyRemoteCerts
	I0603 06:30:02.233095    1152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 06:30:02.233095    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:04.338097    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:04.338097    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:04.338097    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:06.909169    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:06.909169    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:06.909633    1152 sshutil.go:53] new ssh client: &{IP:172.17.90.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-776200\id_rsa Username:docker}
	I0603 06:30:07.017270    1152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7841557s)
	I0603 06:30:07.017416    1152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0603 06:30:07.066591    1152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0603 06:30:07.118524    1152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 06:30:07.164491    1152 provision.go:87] duration metric: took 14.7229732s to configureAuth
	I0603 06:30:07.164491    1152 buildroot.go:189] setting minikube options for container-runtime
	I0603 06:30:07.169465    1152 config.go:182] Loaded profile config "kubernetes-upgrade-776200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:30:07.169465    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:09.344241    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:09.344241    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:09.344241    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:11.867835    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:11.867835    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:11.885982    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:30:11.886611    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:30:11.886611    1152 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 06:30:12.017336    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 06:30:12.017423    1152 buildroot.go:70] root file system type: tmpfs
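
The root filesystem reporting as tmpfs matters: the buildroot guest boots from the ISO into a RAM-backed root, so files like the docker unit written below do not survive a reboot and must be rewritten on every provision. A Linux-only Go sketch of the same check that `df --output=fstype / | tail -n 1` performs, via statfs:

	package main

	import (
		"fmt"
		"syscall"
	)

	const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from <linux/magic.h>

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/", &st); err != nil {
			panic(err)
		}
		// Equivalent of the `df --output=fstype /` probe in the log above.
		fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
	}
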
	I0603 06:30:12.017662    1152 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 06:30:12.017662    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:14.151346    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:14.163041    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:14.163247    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:16.760901    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:16.760901    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:16.767850    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:30:16.767940    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:30:16.767940    1152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 06:30:16.935949    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 06:30:16.935949    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:19.109357    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:19.109408    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:19.109458    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:21.726416    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:21.726416    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:21.736770    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:30:21.737563    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:30:21.737621    1152 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 06:30:21.881427    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 06:30:21.881554    1152 machine.go:97] duration metric: took 44.2308553s to provisionDockerMachine
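
The final SSH command in this provisioning pass is change-gated: `diff -u` exits non-zero only when docker.service.new differs from the live unit, so the mv/daemon-reload/enable/restart branch runs only on a real change. The same guard expressed in Go (a sketch; needs root on a systemd host):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		// Apply docker.service.new only when it differs from the live unit,
		// mirroring the `diff -u ... || { mv ...; systemctl ...; }` one-liner above.
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			return // no change: no daemon-reload, no restart
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}
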
	I0603 06:30:21.881554    1152 start.go:293] postStartSetup for "kubernetes-upgrade-776200" (driver="hyperv")
	I0603 06:30:21.881591    1152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 06:30:21.894893    1152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 06:30:21.894945    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:24.078217    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:24.078217    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:24.078217    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:26.754382    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:26.754672    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:26.754672    1152 sshutil.go:53] new ssh client: &{IP:172.17.90.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-776200\id_rsa Username:docker}
	I0603 06:30:26.876989    1152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9819583s)
	I0603 06:30:26.886716    1152 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 06:30:26.896322    1152 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 06:30:26.896444    1152 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0603 06:30:26.896817    1152 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0603 06:30:26.897442    1152 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem -> 73642.pem in /etc/ssl/certs
	I0603 06:30:26.909449    1152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 06:30:26.936240    1152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\73642.pem --> /etc/ssl/certs/73642.pem (1708 bytes)
	I0603 06:30:27.000918    1152 start.go:296] duration metric: took 5.1193075s for postStartSetup
	I0603 06:30:27.001063    1152 fix.go:56] duration metric: took 51.5451832s for fixHost
	I0603 06:30:27.001063    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:29.168449    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:29.168449    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:29.180007    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:31.746429    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:31.746429    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:31.763456    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:30:31.764130    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:30:31.764130    1152 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 06:30:31.900072    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421431.904657201
	
	I0603 06:30:31.900072    1152 fix.go:216] guest clock: 1717421431.904657201
	I0603 06:30:31.900242    1152 fix.go:229] Guest: 2024-06-03 06:30:31.904657201 -0700 PDT Remote: 2024-06-03 06:30:27.0010638 -0700 PDT m=+295.697739301 (delta=4.903593401s)
	I0603 06:30:31.900365    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:34.099411    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:34.099411    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:34.100198    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:36.799854    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:36.799922    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:36.806538    1152 main.go:141] libmachine: Using SSH client type: native
	I0603 06:30:36.807202    1152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.90 22 <nil> <nil>}
	I0603 06:30:36.807292    1152 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421431
	I0603 06:30:36.981386    1152 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:30:31 UTC 2024
	
	I0603 06:30:36.981386    1152 fix.go:236] clock set: Mon Jun  3 13:30:31 UTC 2024
	 (err=<nil>)
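
The clock fix is plain arithmetic: the host truncates its own reading to whole seconds and pushes it with `sudo date -s @1717421431`, which is exactly the UTC time the guest echoes back. Both the timestamp and the reported 4.9s delta can be checked in Go:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamp the host pushed with `sudo date -s @1717421431`.
		fmt.Println(time.Unix(1717421431, 0).UTC()) // 2024-06-03 13:30:31 +0000 UTC

		// Reconstructing the delta the log reports between guest and host clocks.
		guest := time.Unix(1717421431, 904657201) // guest clock: 1717421431.904657201
		host := time.Date(2024, 6, 3, 6, 30, 27, 1063800, time.FixedZone("PDT", -7*60*60))
		fmt.Println(guest.Sub(host)) // 4.903593401s, matching the log
	}
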
	I0603 06:30:36.981386    1152 start.go:83] releasing machines lock for "kubernetes-upgrade-776200", held for 1m1.5259044s
	I0603 06:30:36.981386    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:39.326916    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:39.333492    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:39.333580    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:42.000757    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:42.000757    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:42.005650    1152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 06:30:42.006199    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:42.020516    1152 ssh_runner.go:195] Run: cat /version.json
	I0603 06:30:42.020516    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-776200 ).state
	I0603 06:30:44.440772    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:44.440772    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:44.440772    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:44.487436    1152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:30:44.487436    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:44.495186    1152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-776200 ).networkadapters[0]).ipaddresses[0]
	I0603 06:30:47.401619    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:47.411143    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:47.411143    1152 sshutil.go:53] new ssh client: &{IP:172.17.90.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-776200\id_rsa Username:docker}
	I0603 06:30:47.443988    1152 main.go:141] libmachine: [stdout =====>] : 172.17.90.90
	
	I0603 06:30:47.443988    1152 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:30:47.450109    1152 sshutil.go:53] new ssh client: &{IP:172.17.90.90 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-776200\id_rsa Username:docker}
	I0603 06:30:49.516118    1152 ssh_runner.go:235] Completed: cat /version.json: (7.4955735s)
	I0603 06:30:49.516272    1152 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.5105932s)
	W0603 06:30:49.516272    1152 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0603 06:30:49.516538    1152 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0603 06:30:49.516538    1152 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
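
curl exit status 28 is its timeout code: DNS resolution inside the VM did not complete within the 2-second budget, hence the proxy warning. The equivalent reachability probe in Go, with the same 2s cap as `curl -sS -m 2`:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same 2-second budget as the curl probe in the log.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("registry unreachable:", err) // the failure mode seen above
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}
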
	I0603 06:30:49.529441    1152 ssh_runner.go:195] Run: systemctl --version
	I0603 06:30:49.551250    1152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 06:30:49.561695    1152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 06:30:49.573497    1152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0603 06:30:49.605700    1152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0603 06:30:49.633451    1152 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
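
The two find/sed passes above normalize any bridge or podman CNI config so its pod subnet is 10.244.0.0/16 (and its gateway 10.244.0.1), keeping the guest's CNI files consistent with the cluster's pod CIDR. The core substitution, shown in Go on a toy config (illustrative only):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// A toy stand-in for a bridge CNI config fragment.
		conf := `{"type": "bridge", "ipam": {"subnet": "10.88.0.0/16"}}`
		re := regexp.MustCompile(`"subnet": "[^"]*"`)
		// The same rewrite the sed expressions in the log perform.
		fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
	}
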
	I0603 06:30:49.633538    1152 start.go:494] detecting cgroup driver to use...
	I0603 06:30:49.633824    1152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 06:30:49.684451    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 06:30:49.718109    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 06:30:49.750006    1152 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 06:30:49.762081    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 06:30:49.793865    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 06:30:49.827410    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 06:30:49.867197    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 06:30:49.904776    1152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 06:30:49.944447    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 06:30:49.980628    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 06:30:50.016768    1152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 06:30:50.048089    1152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 06:30:50.078607    1152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 06:30:50.111208    1152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 06:30:50.386209    1152 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 06:30:50.427881    1152 start.go:494] detecting cgroup driver to use...
	I0603 06:30:50.441812    1152 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 06:30:50.479706    1152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 06:30:50.531460    1152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 06:30:50.580541    1152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 06:30:50.626067    1152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 06:30:50.655898    1152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 06:30:50.705357    1152 ssh_runner.go:195] Run: which cri-dockerd
	I0603 06:30:50.725958    1152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 06:30:50.747327    1152 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 06:30:50.793643    1152 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 06:30:51.098614    1152 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 06:30:51.381774    1152 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 06:30:51.381774    1152 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
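
The 130-byte /etc/docker/daemon.json payload is never printed in the log; given the preceding "configuring docker to use \"cgroupfs\"" line, it presumably carries a native.cgroupdriver exec-opt. A hypothetical writer for such a payload (the contents here are an assumption, not taken from the log):

	package main

	import "os"

	func main() {
		// Assumed shape only: the log reports 130 bytes scp'd to
		// /etc/docker/daemon.json but never shows the actual contents.
		daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
		if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0o644); err != nil {
			panic(err)
		}
	}
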
	I0603 06:30:51.429474    1152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 06:30:51.700494    1152 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 06:32:03.124748    1152 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4238531s)
	I0603 06:32:03.140577    1152 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 06:32:03.209962    1152 out.go:177] 
	W0603 06:32:03.216280    1152 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 13:24:00 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.180810760Z" level=info msg="Starting up"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.182156477Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.183323292Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.216355906Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247305295Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247430097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247516298Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247534998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248169306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248270207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248482510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248587311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248612611Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248626212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249207819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249991429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253774676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253894878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254202382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254320483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254850290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254998492Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.255143693Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.257761026Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258103131Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258134531Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258153031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258170431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258252232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258638337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258818240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258974942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259000842Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259062543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259082843Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259098743Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259115843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259133444Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259157644Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259175544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259189344Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259213345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259229445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259244345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259260545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259275445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259290846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259305746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259321346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259337846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259371147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259465648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259496248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259512748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259532249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259557549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259573149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259587649Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259721351Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259772352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259791552Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259813652Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259827052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259847353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259878053Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260260358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260427460Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260510161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260534361Z" level=info msg="containerd successfully booted in 0.047270s"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.328887948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.477835801Z" level=info msg="Loading containers: start."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.847885834Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.938884436Z" level=info msg="Loading containers: done."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.971905971Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.973412491Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:02 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034149184Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034237985Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.589479483Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:32 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591330685Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591949286Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592097687Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592130287Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.681093141Z" level=info msg="Starting up"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.682122543Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.684026646Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1142
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.720582298Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.750985341Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751150541Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751315542Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751345242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751378842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751393642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751643942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751751842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751774342Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751794842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751825242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751966643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755252147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755374947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755588448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755611948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755637248Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755655748Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755668648Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755917848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756071048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756094948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756112048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756130548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756184649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756765049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756861150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756882250Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756897550Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756912950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756931350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756948250Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756963850Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756980150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757085050Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757146750Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757162350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757196850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757216950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757231250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757245950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757259450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757338650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757357950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757375850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757391750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757409050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757422550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757437150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757451150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757470450Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757554650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757578251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757591851Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757670051Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757692451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758092051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758120551Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758135151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758150851Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758163151Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758703452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758767952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758926652Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758992953Z" level=info msg="containerd successfully booted in 0.039266s"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.728950737Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.761903484Z" level=info msg="Loading containers: start."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.103076772Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.197259006Z" level=info msg="Loading containers: done."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229566352Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229746252Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289127537Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289321237Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:35 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:48 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.498262496Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.500572299Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501204800Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501355700Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501419101Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.595082062Z" level=info msg="Starting up"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.596384764Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.597346665Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1554
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.637730523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.676912779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677085479Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677174079Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677210379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677265279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677616480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678010480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678174581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678211681Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678237781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678350781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678738981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682462587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682661587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683009487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683153688Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683207788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683507788Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683756289Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684155889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684448490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684599990Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684654990Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684786790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684968890Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685593591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685817091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686021992Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686059692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686092292Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686143692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686196192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686233292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686393192Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686532793Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686570393Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686717993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686953493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687094593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687131693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687164793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687195793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687415494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687548794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687587194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687619394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687653394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687681494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687709394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687739594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687776094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688392695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688495295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688568795Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688701296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688773196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688934896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689060696Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689124896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689184596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689242696Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691523100Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691811800Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692121700Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692241201Z" level=info msg="containerd successfully booted in 0.058680s"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.685250118Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.874707789Z" level=info msg="Loading containers: start."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.160545714Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.244155025Z" level=info msg="Loading containers: done."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271378839Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271606771Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328219701Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328785980Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:51 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.696861197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697139223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697159925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697320441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734481192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734705313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734862328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.735231164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.745930686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746101602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746640254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746849574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.780446585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784066131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784128336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784879808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.314759797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315018720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315258342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315565669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.340937345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341239172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341488094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.342108550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437221780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437765629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437899141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.438266373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.439897020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.440752796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.443417635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.461736478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.462777629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463081545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463113547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.464804139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.730225496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731644573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731662074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731769280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120179999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120514116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120603721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.122784031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290035983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290350590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290554094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290754398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419514220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419662923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419680024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.420632145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.891759070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892868394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892984196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.894985440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449673647Z" level=info msg="shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449747149Z" level=warning msg="cleaning up after shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449758849Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:12.451160778Z" level=info msg="ignoring event" container=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.345167807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347038643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347059344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347173446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.344995681Z" level=info msg="ignoring event" container=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.347511510Z" level=info msg="shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348547621Z" level=warning msg="cleaning up after shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348858525Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.575614602Z" level=info msg="ignoring event" container=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578265232Z" level=info msg="shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578964540Z" level=warning msg="cleaning up after shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.579053041Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.386774975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387381181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387526383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.388843697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.884183244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885592760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885645060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.886332468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958527547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958742849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958803750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.959056453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806501667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806702869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806720069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806975871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.497215922Z" level=info msg="ignoring event" container=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498182698Z" level=info msg="shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498835682Z" level=warning msg="cleaning up after shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.499456666Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.655447197Z" level=info msg="ignoring event" container=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656355874Z" level=info msg="shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656590468Z" level=warning msg="cleaning up after shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656609268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:42.771347105Z" level=info msg="ignoring event" container=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772471807Z" level=info msg="shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772831708Z" level=warning msg="cleaning up after shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772947308Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899568913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899839613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899872714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.900055514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:27:34.023667639Z" level=info msg="ignoring event" container=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024579841Z" level=info msg="shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024634141Z" level=warning msg="cleaning up after shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024645641Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243459870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243756671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243822071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243969471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:29:24.035906170Z" level=info msg="ignoring event" container=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037135272Z" level=info msg="shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037679573Z" level=warning msg="cleaning up after shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037775373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.281538979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282024080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282421180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282933281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:30:51 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.727254393Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.934347358Z" level=info msg="shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.936059859Z" level=info msg="ignoring event" container=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.940959563Z" level=warning msg="cleaning up after shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.941144863Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.955137075Z" level=info msg="shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.956970576Z" level=warning msg="cleaning up after shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.957124776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.958191377Z" level=info msg="ignoring event" container=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.995153306Z" level=info msg="ignoring event" container=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.995944407Z" level=info msg="shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996343307Z" level=warning msg="cleaning up after shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996476507Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.007098516Z" level=info msg="shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.007310516Z" level=info msg="ignoring event" container=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008325717Z" level=warning msg="cleaning up after shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008408817Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023889329Z" level=info msg="shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023959729Z" level=warning msg="cleaning up after shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023974429Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.024296530Z" level=info msg="ignoring event" container=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.025216330Z" level=info msg="ignoring event" container=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026434631Z" level=info msg="shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026543031Z" level=warning msg="cleaning up after shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026715932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.058974857Z" level=info msg="ignoring event" container=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.058990157Z" level=info msg="shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059118457Z" level=warning msg="cleaning up after shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059132857Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.059284257Z" level=info msg="ignoring event" container=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059946558Z" level=info msg="shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060011158Z" level=warning msg="cleaning up after shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060024658Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086739979Z" level=info msg="shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086817279Z" level=warning msg="cleaning up after shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086832079Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.087179980Z" level=info msg="ignoring event" container=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.096147887Z" level=info msg="ignoring event" container=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.095968687Z" level=info msg="shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097731888Z" level=warning msg="cleaning up after shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097749988Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113794501Z" level=info msg="shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113900301Z" level=warning msg="cleaning up after shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113914701Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.127906412Z" level=info msg="ignoring event" container=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.128859913Z" level=info msg="ignoring event" container=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130188614Z" level=info msg="shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130290114Z" level=warning msg="cleaning up after shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130307214Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.283990536Z" level=warning msg="cleanup warnings time=\"2024-06-03T13:30:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:56.933255236Z" level=info msg="ignoring event" container=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.934473837Z" level=info msg="shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.940977743Z" level=warning msg="cleaning up after shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.941184043Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.869008356Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.918349827Z" level=info msg="shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919174247Z" level=warning msg="cleaning up after shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919316367Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.920732673Z" level=info msg="ignoring event" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.998239438Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999682548Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999835470Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:31:02 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999962288Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Consumed 12.832s CPU time.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:31:03 kubernetes-upgrade-776200 dockerd[5506]: time="2024-06-03T13:31:03.090539794Z" level=info msg="Starting up"
	Jun 03 13:32:03 kubernetes-upgrade-776200 dockerd[5506]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 13:24:00 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.180810760Z" level=info msg="Starting up"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.182156477Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.183323292Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.216355906Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247305295Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247430097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247516298Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247534998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248169306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248270207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248482510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248587311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248612611Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248626212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249207819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249991429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253774676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253894878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254202382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254320483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254850290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254998492Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.255143693Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.257761026Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258103131Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258134531Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258153031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258170431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258252232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258638337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258818240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258974942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259000842Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259062543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259082843Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259098743Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259115843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259133444Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259157644Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259175544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259189344Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259213345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259229445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259244345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259260545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259275445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259290846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259305746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259321346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259337846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259371147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259465648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259496248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259512748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259532249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259557549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259573149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259587649Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259721351Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259772352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259791552Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259813652Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259827052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259847353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259878053Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260260358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260427460Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260510161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260534361Z" level=info msg="containerd successfully booted in 0.047270s"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.328887948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.477835801Z" level=info msg="Loading containers: start."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.847885834Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.938884436Z" level=info msg="Loading containers: done."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.971905971Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.973412491Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:02 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034149184Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034237985Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.589479483Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:32 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591330685Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591949286Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592097687Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592130287Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.681093141Z" level=info msg="Starting up"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.682122543Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.684026646Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1142
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.720582298Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.750985341Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751150541Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751315542Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751345242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751378842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751393642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751643942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751751842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751774342Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751794842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751825242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751966643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755252147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755374947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755588448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755611948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755637248Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755655748Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755668648Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755917848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756071048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756094948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756112048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756130548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756184649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756765049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756861150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756882250Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756897550Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756912950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756931350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756948250Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756963850Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756980150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757085050Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757146750Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757162350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757196850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757216950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757231250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757245950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757259450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757338650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757357950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757375850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757391750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757409050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757422550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757437150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757451150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757470450Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757554650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757578251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757591851Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757670051Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757692451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758092051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758120551Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758135151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758150851Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758163151Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758703452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758767952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758926652Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758992953Z" level=info msg="containerd successfully booted in 0.039266s"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.728950737Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.761903484Z" level=info msg="Loading containers: start."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.103076772Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.197259006Z" level=info msg="Loading containers: done."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229566352Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229746252Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289127537Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289321237Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:35 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:48 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.498262496Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.500572299Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501204800Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501355700Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501419101Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.595082062Z" level=info msg="Starting up"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.596384764Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.597346665Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1554
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.637730523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.676912779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677085479Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677174079Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677210379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677265279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677616480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678010480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678174581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678211681Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678237781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678350781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678738981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682462587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682661587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683009487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683153688Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683207788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683507788Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683756289Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684155889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684448490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684599990Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684654990Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684786790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684968890Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685593591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685817091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686021992Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686059692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686092292Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686143692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686196192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686233292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686393192Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686532793Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686570393Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686717993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686953493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687094593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687131693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687164793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687195793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687415494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687548794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687587194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687619394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687653394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687681494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687709394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687739594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687776094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688392695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688495295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688568795Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688701296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688773196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688934896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689060696Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689124896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689184596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689242696Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691523100Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691811800Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692121700Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692241201Z" level=info msg="containerd successfully booted in 0.058680s"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.685250118Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.874707789Z" level=info msg="Loading containers: start."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.160545714Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.244155025Z" level=info msg="Loading containers: done."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271378839Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271606771Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328219701Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328785980Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:51 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.696861197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697139223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697159925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697320441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734481192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734705313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734862328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.735231164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.745930686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746101602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746640254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746849574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.780446585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784066131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784128336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784879808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.314759797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315018720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315258342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315565669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.340937345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341239172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341488094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.342108550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437221780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437765629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437899141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.438266373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.439897020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.440752796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.443417635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.461736478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.462777629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463081545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463113547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.464804139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.730225496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731644573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731662074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731769280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120179999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120514116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120603721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.122784031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290035983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290350590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290554094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290754398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419514220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419662923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419680024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.420632145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.891759070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892868394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892984196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.894985440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449673647Z" level=info msg="shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449747149Z" level=warning msg="cleaning up after shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449758849Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:12.451160778Z" level=info msg="ignoring event" container=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.345167807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347038643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347059344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347173446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.344995681Z" level=info msg="ignoring event" container=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.347511510Z" level=info msg="shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348547621Z" level=warning msg="cleaning up after shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348858525Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.575614602Z" level=info msg="ignoring event" container=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578265232Z" level=info msg="shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578964540Z" level=warning msg="cleaning up after shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.579053041Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.386774975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387381181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387526383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.388843697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.884183244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885592760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885645060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.886332468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958527547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958742849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958803750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.959056453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806501667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806702869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806720069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806975871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.497215922Z" level=info msg="ignoring event" container=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498182698Z" level=info msg="shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498835682Z" level=warning msg="cleaning up after shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.499456666Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.655447197Z" level=info msg="ignoring event" container=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656355874Z" level=info msg="shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656590468Z" level=warning msg="cleaning up after shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656609268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:42.771347105Z" level=info msg="ignoring event" container=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772471807Z" level=info msg="shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772831708Z" level=warning msg="cleaning up after shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772947308Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899568913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899839613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899872714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.900055514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:27:34.023667639Z" level=info msg="ignoring event" container=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024579841Z" level=info msg="shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024634141Z" level=warning msg="cleaning up after shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024645641Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243459870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243756671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243822071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243969471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:29:24.035906170Z" level=info msg="ignoring event" container=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037135272Z" level=info msg="shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037679573Z" level=warning msg="cleaning up after shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037775373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.281538979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282024080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282421180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282933281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:30:51 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.727254393Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.934347358Z" level=info msg="shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.936059859Z" level=info msg="ignoring event" container=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.940959563Z" level=warning msg="cleaning up after shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.941144863Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.955137075Z" level=info msg="shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.956970576Z" level=warning msg="cleaning up after shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.957124776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.958191377Z" level=info msg="ignoring event" container=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.995153306Z" level=info msg="ignoring event" container=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.995944407Z" level=info msg="shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996343307Z" level=warning msg="cleaning up after shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996476507Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.007098516Z" level=info msg="shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.007310516Z" level=info msg="ignoring event" container=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008325717Z" level=warning msg="cleaning up after shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008408817Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023889329Z" level=info msg="shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023959729Z" level=warning msg="cleaning up after shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023974429Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.024296530Z" level=info msg="ignoring event" container=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.025216330Z" level=info msg="ignoring event" container=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026434631Z" level=info msg="shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026543031Z" level=warning msg="cleaning up after shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026715932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.058974857Z" level=info msg="ignoring event" container=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.058990157Z" level=info msg="shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059118457Z" level=warning msg="cleaning up after shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059132857Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.059284257Z" level=info msg="ignoring event" container=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059946558Z" level=info msg="shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060011158Z" level=warning msg="cleaning up after shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060024658Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086739979Z" level=info msg="shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086817279Z" level=warning msg="cleaning up after shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086832079Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.087179980Z" level=info msg="ignoring event" container=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.096147887Z" level=info msg="ignoring event" container=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.095968687Z" level=info msg="shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097731888Z" level=warning msg="cleaning up after shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097749988Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113794501Z" level=info msg="shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113900301Z" level=warning msg="cleaning up after shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113914701Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.127906412Z" level=info msg="ignoring event" container=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.128859913Z" level=info msg="ignoring event" container=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130188614Z" level=info msg="shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130290114Z" level=warning msg="cleaning up after shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130307214Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.283990536Z" level=warning msg="cleanup warnings time=\"2024-06-03T13:30:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:56.933255236Z" level=info msg="ignoring event" container=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.934473837Z" level=info msg="shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.940977743Z" level=warning msg="cleaning up after shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.941184043Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.869008356Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.918349827Z" level=info msg="shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919174247Z" level=warning msg="cleaning up after shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919316367Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.920732673Z" level=info msg="ignoring event" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.998239438Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999682548Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999835470Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:31:02 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999962288Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Consumed 12.832s CPU time.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:31:03 kubernetes-upgrade-776200 dockerd[5506]: time="2024-06-03T13:31:03.090539794Z" level=info msg="Starting up"
	Jun 03 13:32:03 kubernetes-upgrade-776200 dockerd[5506]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 06:32:03.217206    1152 out.go:239] * 
	* 
	W0603 06:32:03.219424    1152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 06:32:03.223426    1152 out.go:177] 

** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-776200 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-03 06:32:03.6299856 -0700 PDT m=+10375.228836901
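The failure is rooted in the last stdout lines above: after the stop/start cycle, dockerd[5506] timed out dialing /run/containerd/containerd.sock ("context deadline exceeded"), so docker.service never came back up and the `minikube start` rerun exited with status 90. Note that the earlier successful boot served containerd on /var/run/docker/containerd/containerd.sock (dockerd's managed child), while the restarted daemon dialed the system-level /run/containerd/containerd.sock. A minimal diagnostic sketch, assuming the kubernetes-upgrade-776200 VM is still reachable (profile name taken from the log above; these commands were not run as part of this test):

	REM Is a standalone containerd unit running, and does its socket exist?
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-776200 -- sudo systemctl status containerd --no-pager
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-776200 -- ls -l /run/containerd/containerd.sock
	REM Recent containerd journal entries around the failed Docker restart
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-776200 -- sudo journalctl -u containerd --no-pager -n 50
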
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-776200 -n kubernetes-upgrade-776200
E0603 06:32:10.855858    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-776200 -n kubernetes-upgrade-776200: exit status 2 (11.7505984s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0603 06:32:03.720572    9776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
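The "Unable to resolve the current Docker CLI context" warning in the stderr block above recurs across these tests: the Docker CLI config on the Jenkins host names a current context "default" whose metadata file is missing under .docker\contexts\meta. A hedged cleanup sketch for the host (not executed here; a plausible fix, not a verified one):

	REM Show which context the CLI currently resolves
	docker context show
	REM Re-select the built-in default context so the stale reference is rewritten
	docker context use default
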
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-776200 logs -n 25
E0603 06:33:39.533829    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-776200 logs -n 25: (2m48.778631s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl status docker --all                        |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl cat docker                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /etc/docker/daemon.json                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo docker                         | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | system info                                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl status cri-docker                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo cat                            | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo                                | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo find                           | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-485600 sudo crio                           | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-485600                                     | cilium-485600            | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT | 03 Jun 24 06:29 PDT |
	| start   | -p cert-options-878200                               | cert-options-878200      | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:29 PDT |                     |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                          |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                          |                   |         |                     |                     |
	|         | --apiserver-names=localhost                          |                          |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                          |                   |         |                     |                     |
	|         | --apiserver-port=8555                                |                          |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                          |                   |         |                     |                     |
	| ssh     | docker-flags-580600 ssh                              | docker-flags-580600      | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:30 PDT | 03 Jun 24 06:30 PDT |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=Environment                               |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | docker-flags-580600 ssh                              | docker-flags-580600      | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:30 PDT | 03 Jun 24 06:30 PDT |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=ExecStart                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| delete  | -p docker-flags-580600                               | docker-flags-580600      | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:30 PDT | 03 Jun 24 06:31 PDT |
	| start   | -p force-systemd-env-668100                          | force-systemd-env-668100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 06:31 PDT |                     |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 06:31:31
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 06:31:31.390640   15328 out.go:291] Setting OutFile to fd 1708 ...
	I0603 06:31:31.390640   15328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 06:31:31.390640   15328 out.go:304] Setting ErrFile to fd 1592...
	I0603 06:31:31.390640   15328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 06:31:31.413892   15328 out.go:298] Setting JSON to false
	I0603 06:31:31.417604   15328 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10719,"bootTime":1717410772,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 06:31:31.417604   15328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 06:31:31.421631   15328 out.go:177] * [force-systemd-env-668100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 06:31:31.426059   15328 notify.go:220] Checking for updates...
	I0603 06:31:31.429148   15328 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 06:31:31.431848   15328 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 06:31:31.435249   15328 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 06:31:31.437986   15328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 06:31:31.441132   15328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0603 06:31:28.164157   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:30.359193   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:30.359193   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:30.359193   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:31:31.445364   15328 config.go:182] Loaded profile config "cert-options-878200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:31:31.446185   15328 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:31:31.447070   15328 config.go:182] Loaded profile config "kubernetes-upgrade-776200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:31:31.447827   15328 config.go:182] Loaded profile config "pause-079700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 06:31:31.448212   15328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 06:31:36.731563   15328 out.go:177] * Using the hyperv driver based on user configuration
	I0603 06:31:36.735140   15328 start.go:297] selected driver: hyperv
	I0603 06:31:36.735140   15328 start.go:901] validating driver "hyperv" against <nil>
	I0603 06:31:36.735685   15328 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 06:31:36.787800   15328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 06:31:36.789194   15328 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 06:31:36.789194   15328 cni.go:84] Creating CNI manager for ""
	I0603 06:31:36.789194   15328 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 06:31:36.789194   15328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 06:31:36.789194   15328 start.go:340] cluster config:
	{Name:force-systemd-env-668100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 06:31:36.789194   15328 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 06:31:36.792156   15328 out.go:177] * Starting "force-systemd-env-668100" primary control-plane node in "force-systemd-env-668100" cluster
	I0603 06:31:32.981851   12184 main.go:141] libmachine: [stdout =====>] : 
	I0603 06:31:32.981851   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:34.001682   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:36.616767   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:36.616767   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:36.616833   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:31:36.795475   15328 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 06:31:36.797396   15328 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 06:31:36.797459   15328 cache.go:56] Caching tarball of preloaded images
	I0603 06:31:36.797645   15328 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 06:31:36.797645   15328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 06:31:36.797645   15328 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-env-668100\config.json ...
	I0603 06:31:36.797645   15328 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-env-668100\config.json: {Name:mk93e22f77c65e8a9fcf740075832b5e4ad3d6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 06:31:36.798883   15328 start.go:360] acquireMachinesLock for force-systemd-env-668100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 06:31:39.567549   12184 main.go:141] libmachine: [stdout =====>] : 
	I0603 06:31:39.567549   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:40.573184   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:42.691121   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:42.699405   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:42.699405   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:31:45.140286   12184 main.go:141] libmachine: [stdout =====>] : 
	I0603 06:31:45.140286   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:46.147491   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:48.296925   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:48.296925   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:48.296925   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:31:50.726364   12184 main.go:141] libmachine: [stdout =====>] : 172.17.90.126
	
	I0603 06:31:50.726364   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:50.738209   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:52.737156   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:52.737156   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:52.737156   12184 machine.go:94] provisionDockerMachine start ...
	I0603 06:31:52.748236   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:54.788702   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:54.788702   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:54.800618   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:31:57.206882   12184 main.go:141] libmachine: [stdout =====>] : 172.17.90.126
	
	I0603 06:31:57.206882   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:57.224265   12184 main.go:141] libmachine: Using SSH client type: native
	I0603 06:31:57.231838   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.126 22 <nil> <nil>}
	I0603 06:31:57.231838   12184 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 06:31:57.359453   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 06:31:57.359453   12184 buildroot.go:166] provisioning hostname "pause-079700"
	I0603 06:31:57.359453   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:31:59.375835   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:31:59.375835   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:31:59.387894   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:32:01.807239   12184 main.go:141] libmachine: [stdout =====>] : 172.17.90.126
	
	I0603 06:32:01.807239   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:32:01.824185   12184 main.go:141] libmachine: Using SSH client type: native
	I0603 06:32:01.824830   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.126 22 <nil> <nil>}
	I0603 06:32:01.824903   12184 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-079700 && echo "pause-079700" | sudo tee /etc/hostname
	I0603 06:32:01.971433   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-079700
	
	I0603 06:32:01.971602   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:32:03.124748    1152 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4238531s)
	I0603 06:32:03.140577    1152 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 06:32:03.209962    1152 out.go:177] 
	W0603 06:32:03.216280    1152 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 13:24:00 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.180810760Z" level=info msg="Starting up"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.182156477Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:00.183323292Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.216355906Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247305295Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247430097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247516298Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.247534998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248169306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248270207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248482510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248587311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248612611Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.248626212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249207819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.249991429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253774676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.253894878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254202382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254320483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254850290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.254998492Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.255143693Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.257761026Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258103131Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258134531Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258153031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258170431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258252232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258638337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258818240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.258974942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259000842Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259062543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259082843Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259098743Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259115843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259133444Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259157644Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259175544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259189344Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259213345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259229445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259244345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259260545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259275445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259290846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259305746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259321346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259337846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259371147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259465648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259496248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259512748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259532249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259557549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259573149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259587649Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259721351Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259772352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259791552Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259813652Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259827052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259847353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.259878053Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260260358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260427460Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260510161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:00 kubernetes-upgrade-776200 dockerd[669]: time="2024-06-03T13:24:00.260534361Z" level=info msg="containerd successfully booted in 0.047270s"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.328887948Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.477835801Z" level=info msg="Loading containers: start."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.847885834Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.938884436Z" level=info msg="Loading containers: done."
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.971905971Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:01 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:01.973412491Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:02 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034149184Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:02 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:02.034237985Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.589479483Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:32 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591330685Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.591949286Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592097687Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:32 kubernetes-upgrade-776200 dockerd[663]: time="2024-06-03T13:24:32.592130287Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:33 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.681093141Z" level=info msg="Starting up"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.682122543Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:33.684026646Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1142
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.720582298Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.750985341Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751150541Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751315542Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751345242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751378842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751393642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751643942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751751842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751774342Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751794842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751825242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.751966643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755252147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755374947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755588448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755611948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755637248Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755655748Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755668648Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.755917848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756071048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756094948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756112048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756130548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756184649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756765049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756861150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756882250Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756897550Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756912950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756931350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756948250Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756963850Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.756980150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757085050Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757146750Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757162350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757196850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757216950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757231250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757245950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757259450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757338650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757357950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757375850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757391750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757409050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757422550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757437150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757451150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757470450Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757554650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757578251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757591851Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757670051Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.757692451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758092051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758120551Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758135151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758150851Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758163151Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758703452Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758767952Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758926652Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:33 kubernetes-upgrade-776200 dockerd[1142]: time="2024-06-03T13:24:33.758992953Z" level=info msg="containerd successfully booted in 0.039266s"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.728950737Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:34 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:34.761903484Z" level=info msg="Loading containers: start."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.103076772Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.197259006Z" level=info msg="Loading containers: done."
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229566352Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.229746252Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289127537Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:35 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:35.289321237Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:35 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:48 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.498262496Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.500572299Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501204800Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501355700Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:24:48 kubernetes-upgrade-776200 dockerd[1135]: time="2024-06-03T13:24:48.501419101Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:24:49 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.595082062Z" level=info msg="Starting up"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.596384764Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:49.597346665Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1554
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.637730523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.676912779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677085479Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677174079Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677210379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677265279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.677616480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678010480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678174581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678211681Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678237781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678350781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.678738981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682462587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.682661587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683009487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683153688Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683207788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683507788Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.683756289Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684155889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684448490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684599990Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684654990Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684786790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.684968890Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685593591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.685817091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686021992Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686059692Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686092292Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686143692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686196192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686233292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686393192Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686532793Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686570393Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686717993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.686953493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687094593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687131693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687164793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687195793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687415494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687548794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687587194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687619394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687653394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687681494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687709394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687739594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.687776094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688392695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688495295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688568795Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688701296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688773196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.688934896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689060696Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689124896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689184596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.689242696Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691523100Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.691811800Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692121700Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 13:24:49 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:49.692241201Z" level=info msg="containerd successfully booted in 0.058680s"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.685250118Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 13:24:50 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:50.874707789Z" level=info msg="Loading containers: start."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.160545714Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.244155025Z" level=info msg="Loading containers: done."
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271378839Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.271606771Z" level=info msg="Daemon has completed initialization"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328219701Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 13:24:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:24:51.328785980Z" level=info msg="API listen on [::]:2376"
	Jun 03 13:24:51 kubernetes-upgrade-776200 systemd[1]: Started Docker Application Container Engine.
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.696861197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697139223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697159925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.697320441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734481192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734705313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.734862328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.735231164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.745930686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746101602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746640254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.746849574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.780446585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784066131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784128336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:57 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:57.784879808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.314759797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315018720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315258342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.315565669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.340937345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341239172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.341488094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.342108550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437221780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437765629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.437899141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.438266373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.439897020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.440752796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.443417635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:24:58 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:24:58.461736478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.462777629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463081545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.463113547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.464804139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.730225496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731644573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731662074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:06 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:06.731769280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120179999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120514116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.120603721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:07 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:07.122784031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290035983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290350590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290554094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.290754398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419514220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419662923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.419680024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.420632145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.891759070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892868394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.892984196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:10 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:10.894985440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449673647Z" level=info msg="shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449747149Z" level=warning msg="cleaning up after shim disconnected" id=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:12.449758849Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:12 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:12.451160778Z" level=info msg="ignoring event" container=35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.345167807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347038643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347059344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:14 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:14.347173446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.344995681Z" level=info msg="ignoring event" container=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.347511510Z" level=info msg="shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348547621Z" level=warning msg="cleaning up after shim disconnected" id=3b0bcbdb9f20da34ad299d6e8a9b0db9e8068b8ccefc9c7e09ae141a44c6e545 namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.348858525Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:34.575614602Z" level=info msg="ignoring event" container=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578265232Z" level=info msg="shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.578964540Z" level=warning msg="cleaning up after shim disconnected" id=ab293cde1da27bb24119173c49def2a55b4e8fd457ba41b028974998ae081aed namespace=moby
	Jun 03 13:25:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:34.579053041Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.386774975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387381181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.387526383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.388843697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.884183244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885592760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.885645060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.886332468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958527547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958742849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.958803750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:36 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:36.959056453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806501667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806702869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806720069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:37 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:37.806975871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.497215922Z" level=info msg="ignoring event" container=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498182698Z" level=info msg="shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.498835682Z" level=warning msg="cleaning up after shim disconnected" id=de993546158d44b300398bec1ea98f6748bb05916d77f17da022856752a41385 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.499456666Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:39.655447197Z" level=info msg="ignoring event" container=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656355874Z" level=info msg="shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656590468Z" level=warning msg="cleaning up after shim disconnected" id=914190a57ba9086cc29ca3db2236eda1086c7ed904b67afa1952662bdf1068d6 namespace=moby
	Jun 03 13:25:39 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:39.656609268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:25:42.771347105Z" level=info msg="ignoring event" container=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772471807Z" level=info msg="shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772831708Z" level=warning msg="cleaning up after shim disconnected" id=c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50 namespace=moby
	Jun 03 13:25:42 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:42.772947308Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899568913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899839613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.899872714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:25:53 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:25:53.900055514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:27:34.023667639Z" level=info msg="ignoring event" container=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024579841Z" level=info msg="shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024634141Z" level=warning msg="cleaning up after shim disconnected" id=cb73a7b4b9c6bf19e12ecdba0c1781d2f1501e93430d3208d248321cf829a57b namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.024645641Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243459870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243756671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243822071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:27:34 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:27:34.243969471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:29:24.035906170Z" level=info msg="ignoring event" container=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037135272Z" level=info msg="shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037679573Z" level=warning msg="cleaning up after shim disconnected" id=0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.037775373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.281538979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282024080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282421180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:29:24 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:29:24.282933281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:30:51 kubernetes-upgrade-776200 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.727254393Z" level=info msg="Processing signal 'terminated'"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.934347358Z" level=info msg="shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.936059859Z" level=info msg="ignoring event" container=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.940959563Z" level=warning msg="cleaning up after shim disconnected" id=dc8179ed2750ff743e36b743484acd1d8e8381626734892cdf628f475d7c2ccb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.941144863Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.955137075Z" level=info msg="shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.956970576Z" level=warning msg="cleaning up after shim disconnected" id=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.957124776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.958191377Z" level=info msg="ignoring event" container=9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:51.995153306Z" level=info msg="ignoring event" container=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.995944407Z" level=info msg="shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996343307Z" level=warning msg="cleaning up after shim disconnected" id=785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb namespace=moby
	Jun 03 13:30:51 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:51.996476507Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.007098516Z" level=info msg="shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.007310516Z" level=info msg="ignoring event" container=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008325717Z" level=warning msg="cleaning up after shim disconnected" id=2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.008408817Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023889329Z" level=info msg="shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023959729Z" level=warning msg="cleaning up after shim disconnected" id=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.023974429Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.024296530Z" level=info msg="ignoring event" container=aabb6f27ed2cd6d4ac211bb568cd909a4b58b64408cd1337fd8e87709ff42af3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.025216330Z" level=info msg="ignoring event" container=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026434631Z" level=info msg="shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026543031Z" level=warning msg="cleaning up after shim disconnected" id=df89a8c4ba6e0462c69ba51bcabe6638c5bb301e3e015a37d07f145fecd5100d namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.026715932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.058974857Z" level=info msg="ignoring event" container=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.058990157Z" level=info msg="shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059118457Z" level=warning msg="cleaning up after shim disconnected" id=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059132857Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.059284257Z" level=info msg="ignoring event" container=4f5efda33aaafefd58347742caf78244bf0192e379d497042be91e1085b48710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.059946558Z" level=info msg="shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060011158Z" level=warning msg="cleaning up after shim disconnected" id=e6581d9a0b0e7f216cb291e55fd5af14a35ba8d05eaf7a1a8f22ea642a63f31e namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.060024658Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086739979Z" level=info msg="shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086817279Z" level=warning msg="cleaning up after shim disconnected" id=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.086832079Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.087179980Z" level=info msg="ignoring event" container=3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.096147887Z" level=info msg="ignoring event" container=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.095968687Z" level=info msg="shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097731888Z" level=warning msg="cleaning up after shim disconnected" id=da1971b57e69343a0fd9052605d18a93116853a92fd607af0b87c993979cc498 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.097749988Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113794501Z" level=info msg="shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113900301Z" level=warning msg="cleaning up after shim disconnected" id=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.113914701Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.127906412Z" level=info msg="ignoring event" container=3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:52.128859913Z" level=info msg="ignoring event" container=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130188614Z" level=info msg="shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130290114Z" level=warning msg="cleaning up after shim disconnected" id=9cd83c3d3bbbd7b4eecfb8aeabf9f300c0b330e5a43d329e3ca3538042619cb0 namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.130307214Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:30:52 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:52.283990536Z" level=warning msg="cleanup warnings time=\"2024-06-03T13:30:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:30:56.933255236Z" level=info msg="ignoring event" container=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.934473837Z" level=info msg="shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.940977743Z" level=warning msg="cleaning up after shim disconnected" id=80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a namespace=moby
	Jun 03 13:30:56 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:30:56.941184043Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.869008356Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.918349827Z" level=info msg="shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919174247Z" level=warning msg="cleaning up after shim disconnected" id=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1554]: time="2024-06-03T13:31:01.919316367Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.920732673Z" level=info msg="ignoring event" container=23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.998239438Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999682548Z" level=info msg="Daemon shutdown complete"
	Jun 03 13:31:01 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999835470Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 13:31:02 kubernetes-upgrade-776200 dockerd[1548]: time="2024-06-03T13:31:01.999962288Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Consumed 12.832s CPU time.
	Jun 03 13:31:03 kubernetes-upgrade-776200 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 13:31:03 kubernetes-upgrade-776200 dockerd[5506]: time="2024-06-03T13:31:03.090539794Z" level=info msg="Starting up"
	Jun 03 13:32:03 kubernetes-upgrade-776200 dockerd[5506]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 13:32:03 kubernetes-upgrade-776200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
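The captured journal ends with dockerd giving up because containerd never answered within the startup deadline ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). A minimal Go sketch of that probe, assuming the socket path from the log, can confirm by hand whether containerd is accepting connections inside the guest:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the containerd socket the way a client would, with a short
	// deadline; a timeout here reproduces the "context deadline exceeded"
	// symptom from the journal above. Socket path taken from the log.
	func main() {
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
		if err != nil {
			fmt.Println("containerd socket not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}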
	W0603 06:32:03.217206    1152 out.go:239] * 
	W0603 06:32:03.219424    1152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 06:32:03.223426    1152 out.go:177] 
	I0603 06:32:04.139853   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:32:04.139853   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:32:04.154337   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:32:06.688293   12184 main.go:141] libmachine: [stdout =====>] : 172.17.90.126
	
	I0603 06:32:06.688293   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:32:06.705564   12184 main.go:141] libmachine: Using SSH client type: native
	I0603 06:32:06.706022   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8a4a0] 0xc8d080 <nil>  [] 0s} 172.17.90.126 22 <nil> <nil>}
	I0603 06:32:06.706022   12184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-079700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-079700/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-079700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 06:32:06.849592   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 06:32:06.849592   12184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0603 06:32:06.849592   12184 buildroot.go:174] setting up certificates
	I0603 06:32:06.849592   12184 provision.go:84] configureAuth start
	I0603 06:32:06.849592   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	I0603 06:32:08.984316   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 06:32:08.984316   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:32:08.984432   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-079700 ).networkadapters[0]).ipaddresses[0]
	I0603 06:32:11.511249   12184 main.go:141] libmachine: [stdout =====>] : 172.17.90.126
	
	I0603 06:32:11.511249   12184 main.go:141] libmachine: [stderr =====>] : 
	I0603 06:32:11.511459   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-079700 ).state
	
	
	==> Docker <==
	Jun 03 13:34:03 kubernetes-upgrade-776200 dockerd[6009]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0f77a9bc93b43567f6da851d60de378690a22bf72d9e3981e3ba448e9093443e'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '23bbc631a247c1a472180259b28252165042b155882ec8aae4880710f0fc433e'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3021b68d0576c60bf1451bc02fae9bce74e53b15476364f42c526f4b27c1e1b5'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID 'c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c031ea2e7e8859b96257e013a3c15287d8b4713d62868c855d8652a51feb7b50'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '80485ed8b88b33789257a0291bd9b8f74b3c5798be67a861b54231e669b9cd5a'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '35831cd763d8992f6d7954d959e24d1dfe3aa2fe73c4cc606747d9d2535174b0'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3be7d950a3cfde1877c5959390c1423be7ac81a26643cec8371beee8c9169aca'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9ed0b2f906e58dd76e7e85b538f1a53309e3a04afc17f10240504a3612ce31c8'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '785a8e9e2536fdede6e9426680f67926f7ac5c3840e5adbb8649e3f2e96b56fb'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error getting RW layer size for container ID '2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2f3e0c0808a533754ed6e12c07b1406113138b19f68308708aea78e3393665a2'"
	Jun 03 13:34:03 kubernetes-upgrade-776200 cri-dockerd[1357]: time="2024-06-03T13:34:03Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 13:34:03 kubernetes-upgrade-776200 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:34:03Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.113453] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.646332] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.248204] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.327987] systemd-fstab-generator[1127]: Ignoring "noauto" option for root device
	[  +3.093196] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +0.239181] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.228474] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.325215] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	[  +0.107106] kauditd_printk_skb: 183 callbacks suppressed
	[ +11.912901] systemd-fstab-generator[1540]: Ignoring "noauto" option for root device
	[  +0.135975] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.289534] systemd-fstab-generator[1769]: Ignoring "noauto" option for root device
	[  +4.545953] systemd-fstab-generator[1914]: Ignoring "noauto" option for root device
	[  +0.106600] kauditd_printk_skb: 73 callbacks suppressed
	[Jun 3 13:25] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.920723] kauditd_printk_skb: 31 callbacks suppressed
	[  +3.916257] systemd-fstab-generator[2875]: Ignoring "noauto" option for root device
	[ +17.969618] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.162681] kauditd_printk_skb: 40 callbacks suppressed
	[Jun 3 13:27] hrtimer: interrupt took 4452607 ns
	[Jun 3 13:30] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[  +0.691620] systemd-fstab-generator[5063]: Ignoring "noauto" option for root device
	[  +0.312043] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[  +0.310133] systemd-fstab-generator[5088]: Ignoring "noauto" option for root device
	[  +5.405217] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 13:35:04 up 12 min,  0 users,  load average: 0.00, 0.20, 0.21
	Linux kubernetes-upgrade-776200 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:34:56 kubernetes-upgrade-776200 kubelet[1921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:34:56 kubernetes-upgrade-776200 kubelet[1921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:34:56 kubernetes-upgrade-776200 kubelet[1921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:34:56 kubernetes-upgrade-776200 kubelet[1921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:34:57 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:34:57.740417    1921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-776200?timeout=10s\": dial tcp 172.17.90.90:8443: connect: connection refused" interval="7s"
	Jun 03 13:34:59 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:34:59.468979    1921 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m7.756698636s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904706    1921 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904822    1921 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904866    1921 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904900    1921 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: I0603 13:35:03.904914    1921 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904944    1921 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.904976    1921 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: I0603 13:35:03.904987    1921 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905014    1921 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905131    1921 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905149    1921 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905400    1921 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905450    1921 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905482    1921 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905660    1921 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.905688    1921 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.906437    1921 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.906849    1921 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:35:03 kubernetes-upgrade-776200 kubelet[1921]: E0603 13:35:03.907116    1921 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
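A note on the `%!F(MISSING)` noise in the Docker and kubelet sections of the log above: the reported URLs actually contain percent-encoded slashes (`%2F`), but somewhere along the logging path the message is passed to a Go fmt-style function as the format string, so `%2F` is parsed as a width-2 `%F` verb with no operand. A small sketch reproducing the mangling (the URL here is illustrative):

	package main

	import "fmt"

	func main() {
		msg := "http://%2Fvar%2Frun%2Fdocker.sock/v1.44/version"

		// Wrong: msg is used as the format string, so fmt parses %2F as a
		// width-2 %F verb with no operand and prints "%!F(MISSING)".
		fmt.Printf(msg)
		fmt.Println()

		// Right: pass the message as data to a fixed format string.
		fmt.Printf("%s\n", msg)
	}

The first Printf prints `http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version`, matching the log lines above; the mangled URLs are a cosmetic logging artifact, not the failure itself.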
** stderr ** 
	W0603 06:32:15.489639    9476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 06:33:03.347464    9476 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.381762    9476 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.411654    9476 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.443980    9476 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.473728    9476 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.510857    9476 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:33:03.545643    9476 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 06:34:03.668974    9476 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
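The long hexadecimal directory in the recurring `Unable to resolve the current Docker CLI context "default"` warning is deterministic: the Docker CLI keeps each context's metadata under the SHA-256 of the context name, and `37a8eec1...f0688f` is exactly the digest of the string `default`, as a quick check shows:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// the directory name from the warning above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}

The warning only means that meta.json for the `default` context is missing on this Jenkins host.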
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-776200 -n kubernetes-upgrade-776200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-776200 -n kubernetes-upgrade-776200: exit status 2 (12.5196191s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:35:04.488805    9736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-776200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-776200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-776200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-776200: (1m5.4139013s)
--- FAIL: TestKubernetesUpgrade (1470.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (303.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-647400 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-647400 --driver=hyperv: exit status 1 (4m59.7849163s)

                                                
                                                
-- stdout --
	* [NoKubernetes-647400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-647400" primary control-plane node in "NoKubernetes-647400" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:07:34.243560    9692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-647400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-647400 -n NoKubernetes-647400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-647400 -n NoKubernetes-647400: exit status 7 (3.4524196s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 06:12:33.999200    8432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 06:12:37.317669    8432 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-647400".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-647400 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-647400:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-647400" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.24s)
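For context on the Hyper-V error in the stderr block above: libmachine shells out to PowerShell and parses stdout, as the `[executing ==>]` lines elsewhere in this report show. A rough Go sketch of that status call, with the VM name taken from the failing test (the Get-VM failure simply means Hyper-V has no VM by that name, consistent with the failed start above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Rough sketch of the status query above: shell out to PowerShell and
	// read the combined output. VM name taken from the failing test.
	func main() {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			"( Hyper-V\\Get-VM NoKubernetes-647400 ).state")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\nerr: %v\n", out, err)
	}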

                                                
                                    

Test pass (155/200)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.61
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.34
9 TestDownloadOnly/v1.20.0/DeleteAll 1.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.21
12 TestDownloadOnly/v1.30.1/json-events 10.48
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.21
18 TestDownloadOnly/v1.30.1/DeleteAll 1.22
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.13
21 TestBinaryMirror 7.07
22 TestOffline 258.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
27 TestAddons/Setup 435.21
30 TestAddons/parallel/Ingress 68.77
31 TestAddons/parallel/InspektorGadget 26.48
32 TestAddons/parallel/MetricsServer 21.81
33 TestAddons/parallel/HelmTiller 29.45
35 TestAddons/parallel/CSI 102.24
36 TestAddons/parallel/Headlamp 35.96
37 TestAddons/parallel/CloudSpanner 21.49
38 TestAddons/parallel/LocalPath 86.36
39 TestAddons/parallel/NvidiaDevicePlugin 22.25
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 52.67
44 TestAddons/serial/GCPAuth/Namespaces 0.37
45 TestAddons/StoppedEnableDisable 53.65
46 TestCertOptions 493.89
47 TestCertExpiration 890.06
48 TestDockerFlags 360.5
49 TestForceSystemdFlag 391.71
57 TestErrorSpam/start 16.71
58 TestErrorSpam/status 35.54
59 TestErrorSpam/pause 21.97
60 TestErrorSpam/unpause 21.85
61 TestErrorSpam/stop 53.74
64 TestFunctional/serial/CopySyncFile 0.04
65 TestFunctional/serial/StartWithProxy 234.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 122.06
68 TestFunctional/serial/KubeContext 0.13
69 TestFunctional/serial/KubectlGetPods 0.21
72 TestFunctional/serial/CacheCmd/cache/add_remote 25.56
73 TestFunctional/serial/CacheCmd/cache/add_local 11.18
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
75 TestFunctional/serial/CacheCmd/cache/list 0.17
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 35.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.34
79 TestFunctional/serial/MinikubeKubectlCmd 0.41
81 TestFunctional/serial/ExtraConfig 125.38
82 TestFunctional/serial/ComponentHealth 0.18
83 TestFunctional/serial/LogsCmd 8.09
84 TestFunctional/serial/LogsFileCmd 10.32
85 TestFunctional/serial/InvalidService 20.81
91 TestFunctional/parallel/StatusCmd 41.63
95 TestFunctional/parallel/ServiceCmdConnect 27.79
96 TestFunctional/parallel/AddonsCmd 0.74
97 TestFunctional/parallel/PersistentVolumeClaim 39.81
99 TestFunctional/parallel/SSHCmd 20.63
100 TestFunctional/parallel/CpCmd 61.86
101 TestFunctional/parallel/MySQL 60.45
102 TestFunctional/parallel/FileSync 10.75
103 TestFunctional/parallel/CertSync 65.62
107 TestFunctional/parallel/NodeLabels 0.19
109 TestFunctional/parallel/NonActiveRuntimeDisabled 11.95
111 TestFunctional/parallel/License 3.2
112 TestFunctional/parallel/ServiceCmd/DeployApp 21.52
113 TestFunctional/parallel/Version/short 0.21
114 TestFunctional/parallel/Version/components 8.67
115 TestFunctional/parallel/ImageCommands/ImageListShort 7.46
116 TestFunctional/parallel/ImageCommands/ImageListTable 7.5
117 TestFunctional/parallel/ImageCommands/ImageListJson 7.41
118 TestFunctional/parallel/ImageCommands/ImageListYaml 7.42
119 TestFunctional/parallel/ImageCommands/ImageBuild 25.73
120 TestFunctional/parallel/ImageCommands/Setup 4.76
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.38
122 TestFunctional/parallel/ServiceCmd/List 13.97
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.84
124 TestFunctional/parallel/ServiceCmd/JSONOutput 13.82
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.47
127 TestFunctional/parallel/DockerEnv/powershell 45.6
129 TestFunctional/parallel/UpdateContextCmd/no_changes 3.33
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.59
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.57
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.62
134 TestFunctional/parallel/ImageCommands/ImageRemove 16.38
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.36
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 12.45
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.08
139 TestFunctional/parallel/ProfileCmd/profile_not_create 12.58
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.6
143 TestFunctional/parallel/ProfileCmd/profile_list 10.89
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 10.73
151 TestFunctional/delete_addon-resizer_images 0.47
152 TestFunctional/delete_my-image_image 0.18
153 TestFunctional/delete_minikube_cached_images 0.19
157 TestMultiControlPlane/serial/StartCluster 714.6
158 TestMultiControlPlane/serial/DeployApp 13.24
160 TestMultiControlPlane/serial/AddWorkerNode 257.79
161 TestMultiControlPlane/serial/NodeLabels 0.18
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 28.82
163 TestMultiControlPlane/serial/CopyFile 633.68
164 TestMultiControlPlane/serial/StopSecondaryNode 73.49
168 TestImageBuild/serial/Setup 200.29
169 TestImageBuild/serial/NormalBuild 9.63
170 TestImageBuild/serial/BuildWithBuildArg 9.17
171 TestImageBuild/serial/BuildWithDockerIgnore 8.02
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.69
176 TestJSONOutput/start/Command 244.24
177 TestJSONOutput/start/Audit 0
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/pause/Command 7.7
183 TestJSONOutput/pause/Audit 0
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/unpause/Command 7.4
189 TestJSONOutput/unpause/Audit 0
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 38.77
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 1.35
204 TestMainNoArgs 0.18
205 TestMinikubeProfile 519.98
208 TestMountStart/serial/StartWithMountFirst 158.88
209 TestMountStart/serial/VerifyMountFirst 9.61
210 TestMountStart/serial/StartWithMountSecond 155.32
211 TestMountStart/serial/VerifyMountSecond 9.18
212 TestMountStart/serial/DeleteFirst 29.57
213 TestMountStart/serial/VerifyMountPostDelete 8.96
214 TestMountStart/serial/Stop 29.21
218 TestMultiNode/serial/FreshStart2Nodes 421.53
219 TestMultiNode/serial/DeployApp2Nodes 8.43
221 TestMultiNode/serial/AddNode 227.89
222 TestMultiNode/serial/MultiNodeLabels 0.18
223 TestMultiNode/serial/ProfileList 9.68
224 TestMultiNode/serial/CopyFile 353.83
225 TestMultiNode/serial/StopNode 74.32
226 TestMultiNode/serial/StartAfterStop 179.93
231 TestPreload 523.64
232 TestScheduledStopWindows 325.5
237 TestRunningBinaryUpgrade 1076.95
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.28
244 TestStoppedBinaryUpgrade/Setup 1.36
245 TestStoppedBinaryUpgrade/Upgrade 874.51
246 TestStoppedBinaryUpgrade/MinikubeLogs 10.48
TestDownloadOnly/v1.20.0/json-events (21.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-448100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-448100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (21.6082682s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.61s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-448100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-448100: exit status 85 (336.7449ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |          |
	|         | -p download-only-448100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 03:39:08
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 03:39:08.540912    9032 out.go:291] Setting OutFile to fd 608 ...
	I0603 03:39:08.542557    9032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 03:39:08.542557    9032 out.go:304] Setting ErrFile to fd 612...
	I0603 03:39:08.542557    9032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 03:39:08.556888    9032 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0603 03:39:08.569457    9032 out.go:298] Setting JSON to true
	I0603 03:39:08.574299    9032 start.go:129] hostinfo: {"hostname":"minikube1","uptime":376,"bootTime":1717410772,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 03:39:08.574299    9032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 03:39:08.580575    9032 out.go:97] [download-only-448100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 03:39:08.580575    9032 notify.go:220] Checking for updates...
	I0603 03:39:08.584443    9032 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	W0603 03:39:08.580575    9032 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0603 03:39:08.587727    9032 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 03:39:08.590352    9032 out.go:169] MINIKUBE_LOCATION=19008
	I0603 03:39:08.592748    9032 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0603 03:39:08.597475    9032 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 03:39:08.598684    9032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 03:39:13.931066    9032 out.go:97] Using the hyperv driver based on user configuration
	I0603 03:39:13.931142    9032 start.go:297] selected driver: hyperv
	I0603 03:39:13.931142    9032 start.go:901] validating driver "hyperv" against <nil>
	I0603 03:39:13.931520    9032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 03:39:13.978151    9032 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0603 03:39:13.981429    9032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 03:39:13.981429    9032 cni.go:84] Creating CNI manager for ""
	I0603 03:39:13.981429    9032 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0603 03:39:13.981429    9032 start.go:340] cluster config:
	{Name:download-only-448100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-448100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 03:39:13.983475    9032 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 03:39:13.986036    9032 out.go:97] Downloading VM boot image ...
	I0603 03:39:13.986613    9032 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 03:39:17.260759    9032 out.go:97] Starting "download-only-448100" primary control-plane node in "download-only-448100" cluster
	I0603 03:39:17.260759    9032 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 03:39:17.305572    9032 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0603 03:39:17.305572    9032 cache.go:56] Caching tarball of preloaded images
	I0603 03:39:17.306143    9032 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 03:39:17.309316    9032 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0603 03:39:17.309423    9032 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:17.373300    9032 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0603 03:39:24.804809    9032 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:24.813708    9032 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:25.863588    9032 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0603 03:39:25.872397    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-448100\config.json ...
	I0603 03:39:25.872397    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-448100\config.json: {Name:mkf920a42a9cf60d40a7427803bb4c6a60849b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:39:25.874164    9032 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 03:39:25.874573    9032 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-448100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-448100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 03:39:30.162790    8904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.34s)
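
A note on the download pattern recorded above: each download.go:107 fetch carries a ?checksum=md5:... or ?checksum=file:...sha256 query parameter, and the preload.go lines then save and re-verify that checksum against the file on disk before trusting the cached tarball. A minimal Go sketch of that verify-while-downloading idea follows; the helper name and signature are invented for illustration, and this is not minikube's actual download.go:

    // Sketch: stream a URL to disk while hashing it, then compare the
    // hex-encoded MD5 against the expected value from the checksum parameter.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        // TeeReader feeds the hash as the body is copied to disk,
        // so the download is only read once.
        if _, err := io.Copy(f, io.TeeReader(resp.Body, h)); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            os.Remove(dest) // discard the corrupt download
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and MD5 taken from the v1.30.1 preload lines later in this report.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4"
        if err := downloadAndVerify(url, "preload.tar.lz4", "f110de85c4cd01fa5de0726fbc529387"); err != nil {
            fmt.Println(err)
        }
    }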

TestDownloadOnly/v1.20.0/DeleteAll (1.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2334339s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-448100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-448100: (1.1987688s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.21s)

TestDownloadOnly/v1.30.1/json-events (10.48s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-435800 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-435800 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (10.46186s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (10.48s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-435800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-435800: exit status 85 (203.6812ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | -p download-only-448100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| delete  | -p download-only-448100        | download-only-448100 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT | 03 Jun 24 03:39 PDT |
	| start   | -o=json --download-only        | download-only-435800 | minikube1\jenkins | v1.33.1 | 03 Jun 24 03:39 PDT |                     |
	|         | -p download-only-435800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 03:39:32
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 03:39:32.947113   14460 out.go:291] Setting OutFile to fd 560 ...
	I0603 03:39:32.947733   14460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 03:39:32.947733   14460 out.go:304] Setting ErrFile to fd 604...
	I0603 03:39:32.947733   14460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 03:39:32.973500   14460 out.go:298] Setting JSON to true
	I0603 03:39:32.982343   14460 start.go:129] hostinfo: {"hostname":"minikube1","uptime":400,"bootTime":1717410772,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 03:39:32.982343   14460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 03:39:33.086695   14460 out.go:97] [download-only-435800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 03:39:33.095294   14460 notify.go:220] Checking for updates...
	I0603 03:39:33.097900   14460 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 03:39:33.100219   14460 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 03:39:33.103419   14460 out.go:169] MINIKUBE_LOCATION=19008
	I0603 03:39:33.105759   14460 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0603 03:39:33.112618   14460 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 03:39:33.113986   14460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 03:39:38.636085   14460 out.go:97] Using the hyperv driver based on user configuration
	I0603 03:39:38.636302   14460 start.go:297] selected driver: hyperv
	I0603 03:39:38.636367   14460 start.go:901] validating driver "hyperv" against <nil>
	I0603 03:39:38.636367   14460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 03:39:38.688494   14460 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0603 03:39:38.690004   14460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 03:39:38.690559   14460 cni.go:84] Creating CNI manager for ""
	I0603 03:39:38.690559   14460 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 03:39:38.690559   14460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 03:39:38.690660   14460 start.go:340] cluster config:
	{Name:download-only-435800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-435800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 03:39:38.690660   14460 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 03:39:38.694400   14460 out.go:97] Starting "download-only-435800" primary control-plane node in "download-only-435800" cluster
	I0603 03:39:38.694497   14460 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 03:39:38.735803   14460 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 03:39:38.735803   14460 cache.go:56] Caching tarball of preloaded images
	I0603 03:39:38.736234   14460 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 03:39:38.739088   14460 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0603 03:39:38.739244   14460 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:38.806215   14460 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 03:39:41.364567   14460 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:41.367291   14460 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0603 03:39:42.284386   14460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 03:39:42.292854   14460 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-435800\config.json ...
	I0603 03:39:42.293589   14460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-435800\config.json: {Name:mke97b325a8310e13d38706299c519af99852fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 03:39:42.294632   14460 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 03:39:42.295619   14460 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.30.1/kubectl.exe
	
	
	* The control-plane node download-only-435800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-435800"

-- /stdout --
** stderr ** 
	W0603 03:39:43.420549    7796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.21s)

TestDownloadOnly/v1.30.1/DeleteAll (1.22s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2120034s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.22s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.13s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-435800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-435800: (1.1222833s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.13s)

TestBinaryMirror (7.07s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-143700 --alsologtostderr --binary-mirror http://127.0.0.1:56079 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-143700 --alsologtostderr --binary-mirror http://127.0.0.1:56079 --driver=hyperv: (6.247088s)
helpers_test.go:175: Cleaning up "binary-mirror-143700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-143700
--- PASS: TestBinaryMirror (7.07s)

TestOffline (258.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-647400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-647400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m32.5777914s)
helpers_test.go:175: Cleaning up "offline-docker-647400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-647400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-647400: (45.749515s)
--- PASS: TestOffline (258.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-402100
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-402100: exit status 85 (194.4141ms)

-- stdout --
	* Profile "addons-402100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-402100"

-- /stdout --
** stderr ** 
	W0603 03:39:55.487698    4352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-402100
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-402100: exit status 85 (177.7703ms)

-- stdout --
	* Profile "addons-402100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-402100"

-- /stdout --
** stderr ** 
	W0603 03:39:55.484681    8848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

TestAddons/Setup (435.21s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-402100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-402100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m15.2108442s)
--- PASS: TestAddons/Setup (435.21s)

TestAddons/parallel/Ingress (68.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-402100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-402100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-402100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae6f778a-c835-4279-b185-fa420fec40a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae6f778a-c835-4279-b185-fa420fec40a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0147403s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.4464635s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-402100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0603 03:48:00.990059    7568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-402100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 ip: (2.7236312s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.17.90.102
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable ingress-dns --alsologtostderr -v=1: (17.9290431s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable ingress --alsologtostderr -v=1: (22.5515117s)
--- PASS: TestAddons/parallel/Ingress (68.77s)
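
The curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' step above exercises host-based routing: the request is sent over loopback (via minikube ssh, so it runs inside the VM) but carries a Host header that the ingress-nginx controller matches against the Ingress rule from nginx-ingress-v1.yaml. A Go equivalent of that probe, illustrative only and not the test's implementation:

    // Sketch: send an HTTP request to an address while overriding the Host
    // header, the same trick curl's -H 'Host: ...' performs.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func probeIngress(addr, vhost string) (string, error) {
        req, err := http.NewRequest("GET", "http://"+addr+"/", nil)
        if err != nil {
            return "", err
        }
        req.Host = vhost // routes to the Ingress rule for this virtual host
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }

    func main() {
        // Inside the VM the controller answers on 127.0.0.1:80, as in the test.
        body, err := probeIngress("127.0.0.1", "nginx.example.com")
        fmt.Println(err, len(body))
    }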

TestAddons/parallel/InspektorGadget (26.48s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6xgfd" [91e2469b-4e4e-4672-ab43-cc1e7ba8a485] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0206079s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-402100
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-402100: (21.4529318s)
--- PASS: TestAddons/parallel/InspektorGadget (26.48s)

TestAddons/parallel/MetricsServer (21.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.5632ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wmghb" [12d052b6-5e05-4727-ad22-af68e7eac41f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014011s
addons_test.go:417: (dbg) Run:  kubectl --context addons-402100 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable metrics-server --alsologtostderr -v=1: (16.5462426s)
--- PASS: TestAddons/parallel/MetricsServer (21.81s)

TestAddons/parallel/HelmTiller (29.45s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 22.0753ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-gjs64" [27e103ae-c8cb-4f7d-b6b7-e0e003b5f8cc] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0077169s
addons_test.go:475: (dbg) Run:  kubectl --context addons-402100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-402100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.1202477s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable helm-tiller --alsologtostderr -v=1: (17.2475357s)
--- PASS: TestAddons/parallel/HelmTiller (29.45s)

TestAddons/parallel/CSI (102.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 12.3843ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-402100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-402100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [36d619c7-b177-4974-ba39-9404cdb66f1b] Pending
helpers_test.go:344: "task-pv-pod" [36d619c7-b177-4974-ba39-9404cdb66f1b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [36d619c7-b177-4974-ba39-9404cdb66f1b] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.0202442s
addons_test.go:586: (dbg) Run:  kubectl --context addons-402100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-402100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-402100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-402100 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-402100 delete pod task-pv-pod: (1.2601525s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-402100 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-402100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-402100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [250f1a3d-cfbd-46d4-9ec1-aaa39816e9cd] Pending
helpers_test.go:344: "task-pv-pod-restore" [250f1a3d-cfbd-46d4-9ec1-aaa39816e9cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [250f1a3d-cfbd-46d4-9ec1-aaa39816e9cd] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0116472s
addons_test.go:628: (dbg) Run:  kubectl --context addons-402100 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-402100 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-402100 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.6360699s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable volumesnapshots --alsologtostderr -v=1: (14.6020219s)
--- PASS: TestAddons/parallel/CSI (102.24s)
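
The long runs of kubectl get pvc ... -o jsonpath={.status.phase} above are a poll loop: the helper re-reads the PVC phase until it reports Bound or the 6m0s wait expires. A rough Go sketch of that loop, assuming an invented helper name and a 2s re-poll interval rather than the actual helpers_test.go code:

    // Sketch: shell out to kubectl until the PVC reaches the wanted phase.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForPVCPhase(kctx, ns, pvc, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kctx,
                "get", "pvc", pvc, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second) // assumed interval; the real one may differ
        }
        return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
    }

    func main() {
        // Values mirror the hpvc wait above: context addons-402100, namespace default.
        fmt.Println(waitForPVCPhase("addons-402100", "default", "hpvc", "Bound", 6*time.Minute))
    }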

TestAddons/parallel/Headlamp (35.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-402100 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-402100 --alsologtostderr -v=1: (16.9338385s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-rr7gl" [97e5d84e-6ff9-46b9-8d77-5dd42612a877] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-rr7gl" [97e5d84e-6ff9-46b9-8d77-5dd42612a877] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0203926s
--- PASS: TestAddons/parallel/Headlamp (35.96s)

TestAddons/parallel/CloudSpanner (21.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-hllxv" [fcb60fff-2709-465d-9bda-d1f869ed5d34] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0052108s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-402100
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-402100: (15.4798672s)
--- PASS: TestAddons/parallel/CloudSpanner (21.49s)

TestAddons/parallel/LocalPath (86.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-402100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-402100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7190b782-2ad6-4f7a-b4d4-412d51139378] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7190b782-2ad6-4f7a-b4d4-412d51139378] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7190b782-2ad6-4f7a-b4d4-412d51139378] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0162852s
addons_test.go:992: (dbg) Run:  kubectl --context addons-402100 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 ssh "cat /opt/local-path-provisioner/pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 ssh "cat /opt/local-path-provisioner/pvc-670232d8-e54e-427b-9a5d-e0e6bc60bbec_default_test-pvc/file1": (11.1675332s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-402100 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-402100 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.7023771s)
--- PASS: TestAddons/parallel/LocalPath (86.36s)

TestAddons/parallel/NvidiaDevicePlugin (22.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wq5gk" [d4389b52-e6e3-4329-b22e-44f72dbfe971] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0161572s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-402100
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-402100: (16.2011021s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.25s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-ddwns" [ae31544e-a182-4a32-ac4f-c780f2361bd1] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0170092s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (52.67s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 13.3564ms
addons_test.go:905: volcano-controller stabilized in 13.5538ms
addons_test.go:897: volcano-admission stabilized in 14.7475ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-p9clm" [6ad6ba77-ee9a-4b8c-844a-892e06f7464d] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.0196456s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-qb5qk" [5e88c724-20f9-46a6-89ef-c1bf26818202] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.024761s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-c9mc8" [493a4112-dfcb-47dd-a7d5-00089d54c3be] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0204986s
addons_test.go:924: (dbg) Run:  kubectl --context addons-402100 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-402100 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-402100 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c62c2a41-e1a9-44f1-b89d-a40bcaefc492] Pending
helpers_test.go:344: "test-job-nginx-0" [c62c2a41-e1a9-44f1-b89d-a40bcaefc492] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c62c2a41-e1a9-44f1-b89d-a40bcaefc492] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 11.0088345s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-402100 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-402100 addons disable volcano --alsologtostderr -v=1: (24.7936109s)
--- PASS: TestAddons/parallel/Volcano (52.67s)

TestAddons/serial/GCPAuth/Namespaces (0.37s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-402100 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-402100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)

TestAddons/StoppedEnableDisable (53.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-402100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-402100: (41.0219252s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-402100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-402100: (5.0770588s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-402100
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-402100: (4.7512213s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-402100
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-402100: (2.7931909s)
--- PASS: TestAddons/StoppedEnableDisable (53.65s)

TestCertOptions (493.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-878200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-878200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m7.2987768s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-878200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-878200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.6618194s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-878200 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-878200 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-878200 -- "sudo cat /etc/kubernetes/admin.conf": (10.4176912s)
helpers_test.go:175: Cleaning up "cert-options-878200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-878200
E0603 06:37:10.861231    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-878200: (45.3343528s)
--- PASS: TestCertOptions (493.89s)
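
The openssl x509 -text -noout step above is how the test inspects the generated apiserver certificate for the extra --apiserver-names and --apiserver-ips SANs. The same SAN check can be expressed with crypto/x509; the sketch below is illustrative only (not cert_options_test.go) and assumes the certificate has been copied out of the VM to a local apiserver.crt:

    // Sketch: parse a PEM certificate and confirm each requested DNS name
    // and IP appears among its SANs. VerifyHostname also matches IP strings
    // against the certificate's IPAddresses field.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func checkSANs(certPath string, wantNames []string) error {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("no PEM block in %s", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        for _, name := range wantNames {
            if err := cert.VerifyHostname(name); err != nil {
                return fmt.Errorf("SAN %q not in certificate: %v", name, err)
            }
        }
        return nil
    }

    func main() {
        // SANs taken from the cert-options-878200 start flags above.
        fmt.Println(checkSANs("apiserver.crt",
            []string{"localhost", "www.google.com", "127.0.0.1", "192.168.15.15"}))
    }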

TestCertExpiration (890.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-928400 --memory=2048 --cert-expiration=3m --driver=hyperv
E0603 06:15:02.764407    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-928400 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m25.0297506s)
E0603 06:23:39.529338    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-928400 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-928400 --memory=2048 --cert-expiration=8760h --driver=hyperv: (2m36.6873644s)
helpers_test.go:175: Cleaning up "cert-expiration-928400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-928400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-928400: (48.3290728s)
--- PASS: TestCertExpiration (890.06s)

TestDockerFlags (360.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-580600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-580600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m53.4845865s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-580600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-580600 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.6828487s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-580600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-580600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.6660747s)
helpers_test.go:175: Cleaning up "docker-flags-580600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-580600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-580600: (46.6621735s)
--- PASS: TestDockerFlags (360.50s)
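
The pass above rests on two assertions: the --docker-env pairs must surface in the docker unit's Environment property, and the --docker-opt flags in its ExecStart line. A hedged Go sketch of that verification (profile name and binary path taken from the log; the helper and the exact flag spellings checked are illustrative):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// show returns one systemd property of the docker unit inside the minikube VM.
func show(property string) string {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "docker-flags-580600",
		"ssh", "sudo systemctl show docker --property="+property+" --no-pager").Output()
	if err != nil {
		log.Fatalf("ssh failed: %v", err)
	}
	return string(out)
}

func main() {
	env := show("Environment")
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, want) {
			log.Fatalf("missing %q in %q", want, env)
		}
	}
	// Assumption: --docker-opt=debug / --docker-opt=icc=true end up as dockerd flags.
	start := show("ExecStart")
	for _, want := range []string{"--debug", "--icc=true"} {
		if !strings.Contains(start, want) {
			log.Fatalf("missing %q in %q", want, start)
		}
	}
}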

TestForceSystemdFlag (391.71s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-647400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-647400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m41.7308575s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-647400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-647400 ssh "docker info --format {{.CgroupDriver}}": (10.1860622s)
helpers_test.go:175: Cleaning up "force-systemd-flag-647400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-647400
E0603 06:13:39.515304    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-647400: (39.7906794s)
--- PASS: TestForceSystemdFlag (391.71s)

TestErrorSpam/start (16.71s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run: (5.5002271s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run: (5.7201841s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 start --dry-run: (5.4648942s)
--- PASS: TestErrorSpam/start (16.71s)

TestErrorSpam/status (35.54s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status: (12.3083211s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status: (11.5786976s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 status: (11.6342064s)
--- PASS: TestErrorSpam/status (35.54s)

TestErrorSpam/pause (21.97s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause: (7.6972858s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause: (7.1544169s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 pause: (7.0962355s)
--- PASS: TestErrorSpam/pause (21.97s)

TestErrorSpam/unpause (21.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause: (7.3079617s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause: (7.2336749s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 unpause: (7.2863142s)
--- PASS: TestErrorSpam/unpause (21.85s)

TestErrorSpam/stop (53.74s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop
E0603 03:57:10.827040    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop: (32.8233799s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop: (10.6714135s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop
E0603 03:57:38.633304    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-197000 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000 stop: (10.2226292s)
--- PASS: TestErrorSpam/stop (53.74s)
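
Each TestErrorSpam subtest runs the same subcommand three times and fails if any run emits stderr the test does not expect. The shape of that loop, sketched in Go (the real expected-output handling lives in error_spam_test.go; this version treats any stderr as spam):

package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	for i := 1; i <= 3; i++ {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "nospam-197000",
			"--log_dir", `C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-197000`, "stop")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("run %d: %v", i, err)
		}
		// Anything on stderr that the test's expected list does not cover counts as spam.
		if stderr.Len() > 0 {
			log.Fatalf("run %d produced stderr: %s", i, stderr.String())
		}
	}
}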

TestFunctional/serial/CopySyncFile (0.04s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\7364\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (234.82s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-754300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-754300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m54.8060122s)
--- PASS: TestFunctional/serial/StartWithProxy (234.82s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (122.06s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-754300 --alsologtostderr -v=8
E0603 04:02:10.831233    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-754300 --alsologtostderr -v=8: (2m2.0590568s)
functional_test.go:659: soft start took 2m2.060743s for "functional-754300" cluster.
--- PASS: TestFunctional/serial/SoftStart (122.06s)

TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.21s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-754300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.1: (8.5564396s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:3.3: (8.4198691s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cache add registry.k8s.io/pause:latest: (8.5877785s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.56s)

TestFunctional/serial/CacheCmd/cache/add_local (11.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-754300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2786453663\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-754300 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2786453663\001: (2.273144s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache add minikube-local-cache-test:functional-754300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cache add minikube-local-cache-test:functional-754300: (8.4985105s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache delete minikube-local-cache-test:functional-754300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-754300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.17s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl images: (9.059886s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.0791518s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.0522841s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0603 04:04:55.102358   13000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cache reload: (7.8792963s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.0371193s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.06s)
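
The sequence above is a deliberate round trip: delete the image inside the VM, confirm crictl no longer finds it (the expected exit status 1), run cache reload to push the host-side cache back in, and confirm the image is visible again. A compact Go sketch of that cycle (binary path as logged; the mk helper is illustrative):

package main

import (
	"log"
	"os/exec"
)

// mk runs the minikube binary and returns its error so callers can assert on it.
func mk(args ...string) error {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	log.Printf("minikube %v:\n%s", args, out)
	return err
}

func main() {
	p := "functional-754300"
	mk("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// This inspecti is expected to fail: the image was just removed from the node.
	if mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		log.Fatal("image should be absent before reload")
	}
	mk("-p", p, "cache", "reload")
	if err := mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image should be present after reload: %v", err)
	}
}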

TestFunctional/serial/CacheCmd/cache/delete (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.34s)

TestFunctional/serial/MinikubeKubectlCmd (0.41s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 kubectl -- --context functional-754300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

TestFunctional/serial/ExtraConfig (125.38s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-754300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0603 04:07:10.840846    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-754300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m5.3695131s)
functional_test.go:757: restart took 2m5.3800921s for "functional-754300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (125.38s)

TestFunctional/serial/ComponentHealth (0.18s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-754300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (8.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 logs: (8.0890353s)
--- PASS: TestFunctional/serial/LogsCmd (8.09s)

TestFunctional/serial/LogsFileCmd (10.32s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd489397504\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd489397504\001\logs.txt: (10.3126001s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.32s)

TestFunctional/serial/InvalidService (20.81s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-754300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-754300
E0603 04:08:34.002919    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-754300: exit status 115 (16.1912023s)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.17.94.139:32401 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	W0603 04:08:21.651981    4728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-754300 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-754300 delete -f testdata\invalidsvc.yaml: (1.2149077s)
--- PASS: TestFunctional/serial/InvalidService (20.81s)

TestFunctional/parallel/StatusCmd (41.63s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 status: (15.0773905s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.4724443s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 status -o json: (13.0811976s)
--- PASS: TestFunctional/parallel/StatusCmd (41.63s)

TestFunctional/parallel/ServiceCmdConnect (27.79s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-754300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-754300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2dggb" [37df28f8-5506-4714-824a-d6bb81fc3f37] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2dggb" [37df28f8-5506-4714-824a-d6bb81fc3f37] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0141413s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 service hello-node-connect --url: (19.2051878s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.17.94.139:31801
functional_test.go:1671: http://172.17.94.139:31801: success! body:

Hostname: hello-node-connect-57b4589c47-2dggb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.17.94.139:8080/

Request Headers:
	accept-encoding=gzip
	host=172.17.94.139:31801
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.79s)
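
Once service hello-node-connect --url resolves a NodePort endpoint, the check is a plain HTTP GET whose body must name the serving pod. A sketch of that final step (endpoint hard-coded from the log for illustration):

package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://172.17.94.139:31801")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The echoserver reports its pod hostname, tying the response to the
	// hello-node-connect deployment rather than some other backend.
	if !strings.Contains(string(body), "hello-node-connect") {
		log.Fatalf("unexpected body:\n%s", body)
	}
	log.Println("service reachable")
}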

TestFunctional/parallel/AddonsCmd (0.74s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.74s)

TestFunctional/parallel/PersistentVolumeClaim (39.81s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b33ccee1-44e1-4a45-b3bd-001b1944c26c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0120861s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-754300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-754300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-754300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [41d49300-fc48-4b38-937d-380ae7fdc4fd] Pending
helpers_test.go:344: "sp-pod" [41d49300-fc48-4b38-937d-380ae7fdc4fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [41d49300-fc48-4b38-937d-380ae7fdc4fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0119079s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-754300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-754300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-754300 delete -f testdata/storage-provisioner/pod.yaml: (2.1954791s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9b71b7a4-d933-4aa2-ab9e-c4ef399ee8aa] Pending
helpers_test.go:344: "sp-pod" [9b71b7a4-d933-4aa2-ab9e-c4ef399ee8aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9b71b7a4-d933-4aa2-ab9e-c4ef399ee8aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0365381s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-754300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.81s)
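
The persistence proof above is: write a file through the mounted claim, delete the pod, schedule a fresh pod against the same PVC, and read the file back. Sketched with kubectl (manifests are the testdata files named in the log; the readiness wait is elided to a comment):

package main

import (
	"log"
	"os/exec"
)

func kc(args ...string) {
	args = append([]string{"--context", "functional-754300"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the harness waits here until the new sp-pod is Running)
	// The file outliving the first pod shows the volume, not the container, held it.
	kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
}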

TestFunctional/parallel/SSHCmd (20.63s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "echo hello": (10.4841457s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "cat /etc/hostname": (9.961802s)
--- PASS: TestFunctional/parallel/SSHCmd (20.63s)

TestFunctional/parallel/CpCmd (61.86s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.0468644s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /home/docker/cp-test.txt": (10.6771953s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cp functional-754300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2200497280\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cp functional-754300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2200497280\001\cp-test.txt: (10.5850625s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /home/docker/cp-test.txt": (11.2133663s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.329892s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh -n functional-754300 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.9664594s)
--- PASS: TestFunctional/parallel/CpCmd (61.86s)

TestFunctional/parallel/MySQL (60.45s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-754300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bkgpd" [b9126cfa-7fe1-41bc-a800-1738c150d0f6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bkgpd" [b9126cfa-7fe1-41bc-a800-1738c150d0f6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.0072716s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;": exit status 1 (287.2606ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;": exit status 1 (273.2ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;": exit status 1 (286.4485ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;": exit status 1 (310.7424ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
E0603 04:12:10.839522    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
functional_test.go:1803: (dbg) Run:  kubectl --context functional-754300 exec mysql-64454c8b5c-bkgpd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (60.45s)
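
The four non-zero exits logged above are expected warm-up noise: mysqld first refuses its socket (ERROR 2002), then rejects root while initialization finishes (ERROR 1045), so the harness simply retries until show databases; succeeds. The polling pattern, sketched in Go (interval and deadline are illustrative):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-754300", "exec", "mysql-64454c8b5c-bkgpd",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("mysql ready:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(10 * time.Second) // back off while mysqld initializes
	}
}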

TestFunctional/parallel/FileSync (10.75s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7364/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/test/nested/copy/7364/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/test/nested/copy/7364/hosts": (10.7459851s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.75s)

TestFunctional/parallel/CertSync (65.62s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7364.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/7364.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/7364.pem": (9.9996533s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7364.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /usr/share/ca-certificates/7364.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /usr/share/ca-certificates/7364.pem": (10.8316675s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.9875556s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/73642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/73642.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/73642.pem": (11.0532312s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/73642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /usr/share/ca-certificates/73642.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /usr/share/ca-certificates/73642.pem": (11.9109246s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.8275572s)
--- PASS: TestFunctional/parallel/CertSync (65.62s)

TestFunctional/parallel/NodeLabels (0.19s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-754300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.95s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 ssh "sudo systemctl is-active crio": exit status 1 (11.9433639s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0603 04:08:40.302423    8592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.95s)

TestFunctional/parallel/License (3.2s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.1848954s)
--- PASS: TestFunctional/parallel/License (3.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-754300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-754300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-7kj6x" [2379e5e7-12d8-4b85-84d6-2366b9291d0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-7kj6x" [2379e5e7-12d8-4b85-84d6-2366b9291d0b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.0174551s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.52s)

TestFunctional/parallel/Version/short (0.21s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.21s)

TestFunctional/parallel/Version/components (8.67s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 version -o=json --components: (8.6699645s)
--- PASS: TestFunctional/parallel/Version/components (8.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls --format short --alsologtostderr: (7.452246s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-754300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-754300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-754300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-754300 image ls --format short --alsologtostderr:
W0603 04:11:39.879077    7304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 04:11:39.887970    7304 out.go:291] Setting OutFile to fd 1296 ...
I0603 04:11:39.888367    7304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:39.888906    7304 out.go:304] Setting ErrFile to fd 1292...
I0603 04:11:39.888906    7304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:39.906696    7304 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:39.907241    7304 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:39.907627    7304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:42.298655    7304 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:42.298886    7304 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:42.313954    7304 ssh_runner.go:195] Run: systemctl --version
I0603 04:11:42.313954    7304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:44.496474    7304 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:44.496474    7304 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:44.500995    7304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
I0603 04:11:47.045228    7304 main.go:141] libmachine: [stdout =====>] : 172.17.94.139

                                                
                                                
I0603 04:11:47.045228    7304 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:47.045228    7304 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
I0603 04:11:47.151496    7304 ssh_runner.go:235] Completed: systemctl --version: (4.8375338s)
I0603 04:11:47.163992    7304 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.46s)
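
Note: the stderr above shows the sequence every image command repeats under the hyperv driver: libmachine shells out to PowerShell to confirm the VM is running and to read its first IP address before opening an SSH session. A minimal standalone sketch of those two queries, assuming an elevated PowerShell session with the Hyper-V module available and the functional-754300 VM from this run:

# Query the VM state (expected: Running), then its first IP address,
# mirroring the two libmachine invocations logged above.
$vm = Hyper-V\Get-VM functional-754300
$vm.State
($vm.NetworkAdapters[0]).IPAddresses[0]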

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls --format table --alsologtostderr: (7.4949745s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-754300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-754300 | f5acaa6d55ff2 | 30B    |
| docker.io/library/nginx                     | latest            | 4f67c83422ec7 | 188MB  |
| docker.io/library/nginx                     | alpine            | 70ea0d8cc5300 | 48.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-754300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-754300 image ls --format table --alsologtostderr:
W0603 04:11:47.331812    9720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 04:11:47.338480    9720 out.go:291] Setting OutFile to fd 1300 ...
I0603 04:11:47.338480    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:47.338480    9720 out.go:304] Setting ErrFile to fd 1228...
I0603 04:11:47.338480    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:47.349029    9720 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:47.355489    9720 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:47.356252    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:49.527082    9720 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:49.527082    9720 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:49.542265    9720 ssh_runner.go:195] Run: systemctl --version
I0603 04:11:49.542317    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:51.790964    9720 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:51.790964    9720 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:51.791727    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
I0603 04:11:54.378469    9720 main.go:141] libmachine: [stdout =====>] : 172.17.94.139

                                                
                                                
I0603 04:11:54.378469    9720 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:54.378631    9720 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
I0603 04:11:54.489186    9720 ssh_runner.go:235] Completed: systemctl --version: (4.9468607s)
I0603 04:11:54.498917    9720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls --format json --alsologtostderr: (7.3988873s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-754300 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-754300"],"size":"32900000"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d76
94bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/
etcd:3.5.12-0"],"size":"149000000"},{"id":"f5acaa6d55ff26671afca98a86bb30dd7ee78e361db6d252aad4b6da567f45ee","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-754300"],"size":"30"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-754300 image ls --format json --alsologtostderr:
W0603 04:11:40.897409    3092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 04:11:40.910505    3092 out.go:291] Setting OutFile to fd 1232 ...
I0603 04:11:40.926507    3092 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:40.926507    3092 out.go:304] Setting ErrFile to fd 1152...
I0603 04:11:40.926507    3092 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:40.950442    3092 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:40.950979    3092 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:40.951935    3092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:43.241988    3092 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:43.241988    3092 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:43.257086    3092 ssh_runner.go:195] Run: systemctl --version
I0603 04:11:43.257086    3092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:45.391138    3092 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:45.391138    3092 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:45.391138    3092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
I0603 04:11:47.988139    3092 main.go:141] libmachine: [stdout =====>] : 172.17.94.139

                                                
                                                
I0603 04:11:47.988139    3092 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:47.988139    3092 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
I0603 04:11:48.096400    3092 ssh_runner.go:235] Completed: systemctl --version: (4.8392632s)
I0603 04:11:48.106770    3092 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.41s)
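
Note: the stdout above is a single JSON array of objects with id, repoDigests, repoTags, and size keys, so it is easy to post-process. A sketch in PowerShell, assuming the same binary and profile as this run (2>$null discards the Docker CLI context warning shown in the stderr):

# Capture and parse the JSON listing, then print each image's first
# tag and reported size, largest first.
$raw  = & out/minikube-windows-amd64.exe -p functional-754300 image ls --format json 2>$null | Out-String
$imgs = $raw | ConvertFrom-Json
$imgs | Sort-Object { [long]$_.size } -Descending |
    ForEach-Object { '{0,-60} {1,12}' -f $_.repoTags[0], $_.size }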

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls --format yaml --alsologtostderr: (7.4162698s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-754300 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-754300
size: "32900000"
- id: 70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f5acaa6d55ff26671afca98a86bb30dd7ee78e361db6d252aad4b6da567f45ee
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-754300
size: "30"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-754300 image ls --format yaml --alsologtostderr:
W0603 04:11:48.269339    9724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 04:11:48.274526    9724 out.go:291] Setting OutFile to fd 1164 ...
I0603 04:11:48.278019    9724 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:48.278019    9724 out.go:304] Setting ErrFile to fd 876...
I0603 04:11:48.278019    9724 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:11:48.299081    9724 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:48.299492    9724 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:11:48.300330    9724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:50.507770    9724 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:50.507770    9724 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:50.525369    9724 ssh_runner.go:195] Run: systemctl --version
I0603 04:11:50.525369    9724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:11:52.718205    9724 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:11:52.718205    9724 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:52.718420    9724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
I0603 04:11:55.411367    9724 main.go:141] libmachine: [stdout =====>] : 172.17.94.139

                                                
                                                
I0603 04:11:55.411475    9724 main.go:141] libmachine: [stderr =====>] : 
I0603 04:11:55.411475    9724 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
I0603 04:11:55.517134    9724 ssh_runner.go:235] Completed: systemctl --version: (4.9917567s)
I0603 04:11:55.526918    9724 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (25.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-754300 ssh pgrep buildkitd: exit status 1 (9.2788748s)

                                                
                                                
** stderr ** 
	W0603 04:11:54.851061    8428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image build -t localhost/my-image:functional-754300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image build -t localhost/my-image:functional-754300 testdata\build --alsologtostderr: (9.4551415s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-754300 image build -t localhost/my-image:functional-754300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 4524666b832e
---> Removed intermediate container 4524666b832e
---> 1bb9cb0c16ca
Step 3/3 : ADD content.txt /
---> e2dfeb481b6a
Successfully built e2dfeb481b6a
Successfully tagged localhost/my-image:functional-754300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-754300 image build -t localhost/my-image:functional-754300 testdata\build --alsologtostderr:
W0603 04:12:04.114551   10600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 04:12:04.119566   10600 out.go:291] Setting OutFile to fd 1296 ...
I0603 04:12:04.136401   10600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:12:04.136461   10600 out.go:304] Setting ErrFile to fd 1096...
I0603 04:12:04.136461   10600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 04:12:04.148449   10600 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:12:04.166420   10600 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 04:12:04.167420   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:12:06.298908   10600 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:12:06.298908   10600 main.go:141] libmachine: [stderr =====>] : 
I0603 04:12:06.312156   10600 ssh_runner.go:195] Run: systemctl --version
I0603 04:12:06.312156   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-754300 ).state
I0603 04:12:08.471971   10600 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 04:12:08.483624   10600 main.go:141] libmachine: [stderr =====>] : 
I0603 04:12:08.483624   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-754300 ).networkadapters[0]).ipaddresses[0]
I0603 04:12:10.999092   10600 main.go:141] libmachine: [stdout =====>] : 172.17.94.139

                                                
                                                
I0603 04:12:10.999092   10600 main.go:141] libmachine: [stderr =====>] : 
I0603 04:12:10.999655   10600 sshutil.go:53] new ssh client: &{IP:172.17.94.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-754300\id_rsa Username:docker}
I0603 04:12:11.097614   10600 ssh_runner.go:235] Completed: systemctl --version: (4.7854501s)
I0603 04:12:11.097614   10600 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3024802440.tar
I0603 04:12:11.112930   10600 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0603 04:12:11.144404   10600 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3024802440.tar
I0603 04:12:11.151832   10600 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3024802440.tar: stat -c "%s %y" /var/lib/minikube/build/build.3024802440.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3024802440.tar': No such file or directory
I0603 04:12:11.151832   10600 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3024802440.tar --> /var/lib/minikube/build/build.3024802440.tar (3072 bytes)
I0603 04:12:11.209156   10600 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3024802440
I0603 04:12:11.239417   10600 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3024802440 -xf /var/lib/minikube/build/build.3024802440.tar
I0603 04:12:11.256995   10600 docker.go:360] Building image: /var/lib/minikube/build/build.3024802440
I0603 04:12:11.267933   10600 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-754300 /var/lib/minikube/build/build.3024802440
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0603 04:12:13.382203   10600 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-754300 /var/lib/minikube/build/build.3024802440: (2.1141934s)
I0603 04:12:13.394134   10600 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3024802440
I0603 04:12:13.427922   10600 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3024802440.tar
I0603 04:12:13.446939   10600 build_images.go:217] Built localhost/my-image:functional-754300 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3024802440.tar
I0603 04:12:13.446939   10600 build_images.go:133] succeeded building to: functional-754300
I0603 04:12:13.446939   10600 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (6.9734599s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (25.73s)
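
Note: judging from the three build steps in the stdout above, the testdata\build context presumably contains a Dockerfile along these lines. The following sketch recreates an equivalent context and drives the same in-VM build; the Dockerfile body and the content.txt payload are reconstructions from the logged steps, not the checked-in files:

# Recreate an equivalent build context and run the same image build.
New-Item -ItemType Directory -Path build -Force | Out-Null
Set-Content -Path build\content.txt -Value 'hello'
@'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
'@ | Set-Content -Path build\Dockerfile
& out/minikube-windows-amd64.exe -p functional-754300 image build -t localhost/my-image:functional-754300 build --alsologtostderr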

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.440928s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-754300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr: (16.1553246s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (8.2154478s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (13.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 service list: (13.9683647s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr: (13.3553482s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (8.461257s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (13.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 service list -o json: (13.8152453s)
functional_test.go:1490: Took "13.8153634s" to run "out/minikube-windows-amd64.exe -p functional-754300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.1494889s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-754300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image load --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr: (16.4821704s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (8.5316255s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.47s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (45.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-754300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-754300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-754300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-754300": (29.8667973s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-754300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-754300 docker-env | Invoke-Expression ; docker images": (15.7134243s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (45.60s)
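
Note: the docker-env test above exercises the same pattern a developer uses interactively: evaluate the emitted environment assignments, after which the local docker client talks to the daemon inside the minikube VM. A sketch, assuming the same binary and profile:

# Point this PowerShell session's docker client at the VM's daemon,
# then verify by listing the images cached there.
& out/minikube-windows-amd64.exe -p functional-754300 docker-env | Invoke-Expression
docker images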

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (3.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2: (3.3266913s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2: (2.5917314s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.59s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 update-context --alsologtostderr -v=2: (2.5685021s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image save gcr.io/google-containers/addon-resizer:functional-754300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image save gcr.io/google-containers/addon-resizer:functional-754300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.6107087s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (16.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image rm gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image rm gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr: (8.5676905s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (7.7959024s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.6119319s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image ls: (7.7289525s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-754300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-754300 image save --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-754300 image save --daemon gcr.io/google-containers/addon-resizer:functional-754300 --alsologtostderr: (11.9422634s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-754300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9888: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 1700: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.08s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.1233346s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-754300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [062ff70e-6124-452f-b3b0-c8097303114e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [062ff70e-6124-452f-b3b0-c8097303114e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.0164618s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.60s)
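
Note: the tunnel tests follow the standard workflow: keep minikube tunnel running in the background, deploy a LoadBalancer service, then wait for it to receive an external IP. A sketch of that sequence, assuming an elevated session and that testdata\testsvc.yaml defines a LoadBalancer service named nginx-svc (the run=nginx-svc pod label above suggests as much):

# Start the tunnel in the background, deploy the test service, and
# watch for the LoadBalancer to be assigned an external IP.
Start-Process out/minikube-windows-amd64.exe -ArgumentList '-p','functional-754300','tunnel','--alsologtostderr'
kubectl --context functional-754300 apply -f testdata\testsvc.yaml
kubectl --context functional-754300 get svc nginx-svc --watch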

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.6809865s)
functional_test.go:1311: Took "10.6886382s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "197.0217ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.89s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-754300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3520: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (10.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.5443928s)
functional_test.go:1362: Took "10.5443928s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "178.5206ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.73s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.47s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-754300
--- PASS: TestFunctional/delete_addon-resizer_images (0.47s)

                                                
                                    
TestFunctional/delete_my-image_image (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-754300
--- PASS: TestFunctional/delete_my-image_image (0.18s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.19s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-754300
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (714.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-528700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0603 04:18:39.493832    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.508959    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.524536    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.556241    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.604057    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.698561    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:39.873171    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:40.207898    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:40.854765    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:42.136446    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:44.707241    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:18:49.841218    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:19:00.082849    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:19:20.569687    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:20:01.539943    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:21:23.462389    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:22:10.832495    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:23:39.493071    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:24:07.318387    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:25:14.017043    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:27:10.835239    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:28:39.505150    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-528700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m17.4465358s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr: (37.151078s)
--- PASS: TestMultiControlPlane/serial/StartCluster (714.60s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (13.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- rollout status deployment/busybox: (5.4743262s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- nslookup kubernetes.io: (1.744928s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- nslookup kubernetes.io: (1.5244548s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-bz4xm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-hd7gx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec busybox-fc5497c4f-np7rl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.24s)
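
For reference, the DeployApp check above can be replayed by hand against the same profile. A minimal sketch using the commands from this run; the busybox pod names (e.g. busybox-fc5497c4f-bz4xm) vary per run and must be read back from the get pods output:

	out/minikube-windows-amd64.exe kubectl -p ha-528700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-windows-amd64.exe kubectl -p ha-528700 -- rollout status deployment/busybox
	out/minikube-windows-amd64.exe kubectl -p ha-528700 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-windows-amd64.exe kubectl -p ha-528700 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local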

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (257.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-528700 -v=7 --alsologtostderr
E0603 04:32:10.841781    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:33:39.508637    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-528700 -v=7 --alsologtostderr: (3m29.6587231s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr
E0603 04:35:02.685345    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr: (48.116338s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (257.79s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-528700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (28.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (28.8215311s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (28.82s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (633.68s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 status --output json -v=7 --alsologtostderr: (48.7643724s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700:/home/docker/cp-test.txt: (9.4237793s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt": (9.4074861s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700.txt: (9.3936749s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt": (9.3910121s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700_ha-528700-m02.txt
E0603 04:37:10.844723    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700_ha-528700-m02.txt: (16.6526819s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt": (9.4616985s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m02.txt": (9.2958164s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700_ha-528700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700_ha-528700-m03.txt: (16.3323818s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt": (9.537725s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m03.txt": (9.4197559s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700_ha-528700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700_ha-528700-m04.txt: (16.4781259s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt"
E0603 04:38:39.497988    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test.txt": (9.3204837s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700_ha-528700-m04.txt": (9.307383s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m02:/home/docker/cp-test.txt: (9.5386893s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt": (9.702901s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m02.txt: (9.7222801s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt": (9.7416166s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m02_ha-528700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m02_ha-528700.txt: (16.9206281s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt": (9.612833s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700.txt": (9.8148869s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700-m02_ha-528700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700-m02_ha-528700-m03.txt: (16.9601468s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt": (9.7666765s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700-m03.txt": (9.6768659s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700-m02_ha-528700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m02:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700-m02_ha-528700-m04.txt: (17.1450089s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt": (9.7909648s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700-m02_ha-528700-m04.txt": (9.8322131s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m03:/home/docker/cp-test.txt: (9.6859169s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt": (9.5866011s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m03.txt: (9.6403407s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt"
E0603 04:41:54.031030    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt": (9.6764366s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m03_ha-528700.txt
E0603 04:42:10.831461    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m03_ha-528700.txt: (16.9633187s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt": (9.713222s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700.txt": (9.7490208s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt: (16.9129151s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt": (9.646931s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700-m02.txt": (9.7641337s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m03:/home/docker/cp-test.txt ha-528700-m04:/home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt: (17.2055823s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test.txt": (9.8745272s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt"
E0603 04:43:39.503947    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test_ha-528700-m03_ha-528700-m04.txt": (9.7857359s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m04:/home/docker/cp-test.txt: (9.7999896s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt": (9.510234s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2800057214\001\cp-test_ha-528700-m04.txt: (9.7125939s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt": (9.567891s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m04_ha-528700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700:/home/docker/cp-test_ha-528700-m04_ha-528700.txt: (16.5174791s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt": (9.4320871s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700.txt": (9.4615597s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700-m02:/home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt: (16.5801671s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt": (9.5011413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700-m02.txt": (9.4272608s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 cp ha-528700-m04:/home/docker/cp-test.txt ha-528700-m03:/home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt: (16.3505676s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m04 "sudo cat /home/docker/cp-test.txt": (9.5039635s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m03 "sudo cat /home/docker/cp-test_ha-528700-m04_ha-528700-m03.txt": (9.5368465s)
--- PASS: TestMultiControlPlane/serial/CopyFile (633.68s)
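
Every hop in the CopyFile matrix above is the same two-step round trip: a cp into a node, then an ssh back into that node to cat the file. A minimal sketch for a single host-to-node pair, with commands taken verbatim from this run:

	out/minikube-windows-amd64.exe -p ha-528700 cp testdata\cp-test.txt ha-528700-m02:/home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p ha-528700 ssh -n ha-528700-m02 "sudo cat /home/docker/cp-test.txt"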

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (73.49s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-528700 node stop m02 -v=7 --alsologtostderr: (35.4662363s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr
E0603 04:47:10.840171    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-528700 status -v=7 --alsologtostderr: exit status 7 (38.0202523s)

                                                
                                                
-- stdout --
	ha-528700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-528700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-528700-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-528700-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 04:46:46.747801   10748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 04:46:46.751733   10748 out.go:291] Setting OutFile to fd 1048 ...
	I0603 04:46:46.774545   10748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:46:46.774545   10748 out.go:304] Setting ErrFile to fd 1020...
	I0603 04:46:46.774545   10748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:46:46.788997   10748 out.go:298] Setting JSON to false
	I0603 04:46:46.788997   10748 mustload.go:65] Loading cluster: ha-528700
	I0603 04:46:46.790551   10748 notify.go:220] Checking for updates...
	I0603 04:46:46.790951   10748 config.go:182] Loaded profile config "ha-528700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:46:46.790951   10748 status.go:255] checking status of ha-528700 ...
	I0603 04:46:46.791608   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:46:49.002199   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:46:49.002270   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:46:49.002270   10748 status.go:330] ha-528700 host status = "Running" (err=<nil>)
	I0603 04:46:49.002270   10748 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:46:49.003304   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:46:51.230902   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:46:51.230902   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:46:51.231054   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:46:53.838083   10748 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:46:53.850114   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:46:53.850114   10748 host.go:66] Checking if "ha-528700" exists ...
	I0603 04:46:53.863821   10748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 04:46:53.863821   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700 ).state
	I0603 04:46:56.047497   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:46:56.047582   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:46:56.047650   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]
	I0603 04:46:58.667828   10748 main.go:141] libmachine: [stdout =====>] : 172.17.88.175
	
	I0603 04:46:58.667828   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:46:58.667828   10748 sshutil.go:53] new ssh client: &{IP:172.17.88.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700\id_rsa Username:docker}
	I0603 04:46:58.769610   10748 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.905668s)
	I0603 04:46:58.779158   10748 ssh_runner.go:195] Run: systemctl --version
	I0603 04:46:58.801303   10748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:46:58.827414   10748 kubeconfig.go:125] found "ha-528700" server: "https://172.17.95.254:8443"
	I0603 04:46:58.827414   10748 api_server.go:166] Checking apiserver status ...
	I0603 04:46:58.840003   10748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:46:58.878577   10748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2000/cgroup
	W0603 04:46:58.898832   10748 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2000/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 04:46:58.912974   10748 ssh_runner.go:195] Run: ls
	I0603 04:46:58.920896   10748 api_server.go:253] Checking apiserver healthz at https://172.17.95.254:8443/healthz ...
	I0603 04:46:58.927514   10748 api_server.go:279] https://172.17.95.254:8443/healthz returned 200:
	ok
	I0603 04:46:58.929451   10748 status.go:422] ha-528700 apiserver status = Running (err=<nil>)
	I0603 04:46:58.929451   10748 status.go:257] ha-528700 status: &{Name:ha-528700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 04:46:58.929451   10748 status.go:255] checking status of ha-528700-m02 ...
	I0603 04:46:58.930422   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	I0603 04:47:01.002011   10748 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 04:47:01.013416   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:01.013416   10748 status.go:330] ha-528700-m02 host status = "Stopped" (err=<nil>)
	I0603 04:47:01.013416   10748 status.go:343] host is not running, skipping remaining checks
	I0603 04:47:01.013416   10748 status.go:257] ha-528700-m02 status: &{Name:ha-528700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 04:47:01.013416   10748 status.go:255] checking status of ha-528700-m03 ...
	I0603 04:47:01.014319   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:47:03.137823   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:03.148302   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:03.148302   10748 status.go:330] ha-528700-m03 host status = "Running" (err=<nil>)
	I0603 04:47:03.148302   10748 host.go:66] Checking if "ha-528700-m03" exists ...
	I0603 04:47:03.149169   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:47:05.302546   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:05.302546   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:05.314265   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:47:07.848638   10748 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:47:07.856358   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:07.856358   10748 host.go:66] Checking if "ha-528700-m03" exists ...
	I0603 04:47:07.867420   10748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 04:47:07.867420   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m03 ).state
	I0603 04:47:09.999497   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:09.999497   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:10.010717   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 04:47:12.626331   10748 main.go:141] libmachine: [stdout =====>] : 172.17.89.50
	
	I0603 04:47:12.626331   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:12.638951   10748 sshutil.go:53] new ssh client: &{IP:172.17.89.50 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m03\id_rsa Username:docker}
	I0603 04:47:12.744881   10748 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8774469s)
	I0603 04:47:12.752998   10748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:47:12.789114   10748 kubeconfig.go:125] found "ha-528700" server: "https://172.17.95.254:8443"
	I0603 04:47:12.789114   10748 api_server.go:166] Checking apiserver status ...
	I0603 04:47:12.802284   10748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 04:47:12.847273   10748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup
	W0603 04:47:12.868658   10748 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 04:47:12.880693   10748 ssh_runner.go:195] Run: ls
	I0603 04:47:12.889638   10748 api_server.go:253] Checking apiserver healthz at https://172.17.95.254:8443/healthz ...
	I0603 04:47:12.900784   10748 api_server.go:279] https://172.17.95.254:8443/healthz returned 200:
	ok
	I0603 04:47:12.901783   10748 status.go:422] ha-528700-m03 apiserver status = Running (err=<nil>)
	I0603 04:47:12.901783   10748 status.go:257] ha-528700-m03 status: &{Name:ha-528700-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 04:47:12.901783   10748 status.go:255] checking status of ha-528700-m04 ...
	I0603 04:47:12.902534   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m04 ).state
	I0603 04:47:15.113960   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:15.113960   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:15.113960   10748 status.go:330] ha-528700-m04 host status = "Running" (err=<nil>)
	I0603 04:47:15.113960   10748 host.go:66] Checking if "ha-528700-m04" exists ...
	I0603 04:47:15.115036   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m04 ).state
	I0603 04:47:17.267644   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:17.267644   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:17.267644   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m04 ).networkadapters[0]).ipaddresses[0]
	I0603 04:47:19.828850   10748 main.go:141] libmachine: [stdout =====>] : 172.17.88.156
	
	I0603 04:47:19.828850   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:19.828850   10748 host.go:66] Checking if "ha-528700-m04" exists ...
	I0603 04:47:19.851717   10748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 04:47:19.851717   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m04 ).state
	I0603 04:47:21.979528   10748 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 04:47:21.979598   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:21.979759   10748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700-m04 ).networkadapters[0]).ipaddresses[0]
	I0603 04:47:24.490413   10748 main.go:141] libmachine: [stdout =====>] : 172.17.88.156
	
	I0603 04:47:24.490413   10748 main.go:141] libmachine: [stderr =====>] : 
	I0603 04:47:24.501966   10748 sshutil.go:53] new ssh client: &{IP:172.17.88.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-528700-m04\id_rsa Username:docker}
	I0603 04:47:24.602209   10748 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7504789s)
	I0603 04:47:24.612709   10748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 04:47:24.637584   10748 status.go:257] ha-528700-m04 status: &{Name:ha-528700-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (73.49s)
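
The stderr block above shows how the hyperv driver computes node status: it shells out to PowerShell for the VM state and the first NIC address before opening an SSH session. The same queries can be issued by hand (VM names taken from this run):

	powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-528700-m02 ).state
	powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-528700 ).networkadapters[0]).ipaddresses[0]

A stopped VM reports Off, which status.go maps to Host:Stopped and uses to skip the remaining kubelet/apiserver checks, matching the ha-528700-m02 entry above.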

                                                
                                    
TestImageBuild/serial/Setup (200.29s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-547700 --driver=hyperv
E0603 04:51:42.697764    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 04:52:10.842834    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:53:39.505697    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-547700 --driver=hyperv: (3m20.2932654s)
--- PASS: TestImageBuild/serial/Setup (200.29s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.63s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-547700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-547700: (9.6268241s)
--- PASS: TestImageBuild/serial/NormalBuild (9.63s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.17s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-547700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-547700: (9.1645136s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.17s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (8.02s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-547700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-547700: (8.0196408s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.02s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.69s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-547700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-547700: (7.6882075s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.69s)

                                                
                                    
TestJSONOutput/start/Command (244.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-252800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0603 04:57:10.835642    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:58:34.045993    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 04:58:39.512529    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-252800 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m4.2259029s)
--- PASS: TestJSONOutput/start/Command (244.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-252800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-252800 --output=json --user=testUser: (7.696578s)
--- PASS: TestJSONOutput/pause/Command (7.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.4s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-252800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-252800 --output=json --user=testUser: (7.3921655s)
--- PASS: TestJSONOutput/unpause/Command (7.40s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (38.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-252800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-252800 --output=json --user=testUser: (38.7634548s)
--- PASS: TestJSONOutput/stop/Command (38.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.35s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-234800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-234800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (193.8207ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c14e90eb-20fb-4d15-a148-10ce194a7fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-234800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6b9c5d7-e771-4667-a918-af86f8d7c5a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"49377fe9-00f8-49b4-90e5-b75cfb0bc7da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"203269ab-12eb-4dd5-9852-2eab57463ebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"6e1ecd35-0142-4db2-aa56-ac00c383a7dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19008"}}
	{"specversion":"1.0","id":"eb2e5ffa-96e1-4198-90c0-79e3dee4946a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10bf2752-5e6f-47e6-b0ba-be87723cdcff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 05:00:32.165386   15040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-234800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-234800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-234800: (1.1434734s)
--- PASS: TestErrorJSONOutput (1.35s)
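
Each line of the --output=json stream above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). As a sketch, the error event from a run like this one could be pulled out with jq, assuming jq is on the PATH and a shell with POSIX quoting:

	out/minikube-windows-amd64.exe start -p json-output-error-234800 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'

For the run above this would print: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on windows/amd64.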

                                                
                                    
TestMainNoArgs (0.18s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.18s)

                                                
                                    
TestMinikubeProfile (519.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-648100 --driver=hyperv
E0603 05:02:10.838011    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:03:39.504914    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-648100 --driver=hyperv: (3m11.0965534s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-648100 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-648100 --driver=hyperv: (3m21.4467288s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-648100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0603 05:07:10.848396    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.7241165s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-648100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.4305542s)
helpers_test.go:175: Cleaning up "second-648100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-648100
E0603 05:08:22.705200    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-648100: (41.3268892s)
helpers_test.go:175: Cleaning up "first-648100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-648100
E0603 05:08:39.510127    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-648100: (46.200821s)
--- PASS: TestMinikubeProfile (519.98s)
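
The profile switching above is checked by decoding `minikube profile list -ojson`. A rough sketch of that decode step; the top-level "valid"/"invalid" keys and the "Name" field are assumptions about the JSON shape, since this log does not print the payload:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Binary path as invoked in the log above.
		out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Assumed shape: {"invalid":[...],"valid":[{"Name":...}, ...]}
		var profiles map[string][]struct {
			Name string `json:"Name"`
		}
		if err := json.Unmarshal(out, &profiles); err != nil {
			log.Fatal(err)
		}
		for _, p := range profiles["valid"] {
			fmt.Println(p.Name)
		}
	}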

TestMountStart/serial/StartWithMountFirst (158.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-841900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-841900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.8709754s)
--- PASS: TestMountStart/serial/StartWithMountFirst (158.88s)

TestMountStart/serial/VerifyMountFirst (9.61s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-841900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-841900 ssh -- ls /minikube-host: (9.6116402s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.61s)

TestMountStart/serial/StartWithMountSecond (155.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-841900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0603 05:12:10.842482    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:13:39.509708    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-841900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.3071835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (155.32s)

TestMountStart/serial/VerifyMountSecond (9.18s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-841900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-841900 ssh -- ls /minikube-host: (9.1771718s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.18s)

TestMountStart/serial/DeleteFirst (29.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-841900 --alsologtostderr -v=5
E0603 05:15:14.063538    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-841900 --alsologtostderr -v=5: (29.5644374s)
--- PASS: TestMountStart/serial/DeleteFirst (29.57s)

TestMountStart/serial/VerifyMountPostDelete (8.96s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-841900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-841900 ssh -- ls /minikube-host: (8.9499695s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.96s)

TestMountStart/serial/Stop (29.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-841900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-841900: (29.2057808s)
--- PASS: TestMountStart/serial/Stop (29.21s)

TestMultiNode/serial/FreshStart2Nodes (421.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-316400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0603 05:22:10.839831    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 05:23:39.505794    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 05:25:02.721681    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-316400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m37.5601463s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr: (23.963204s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (421.53s)
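
With --nodes=2 the freshly started cluster should report two nodes. A quick way to confirm that, in the same kubectl jsonpath style the MultiNodeLabels subtest below uses; the context name is from the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "multinode-316400",
			"get", "nodes", "-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatal(err)
		}
		names := strings.Fields(string(out))
		fmt.Printf("%d nodes: %v\n", len(names), names)
	}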

TestMultiNode/serial/DeployApp2Nodes (8.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- rollout status deployment/busybox: (3.1790428s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- nslookup kubernetes.io: (1.8728012s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-hmxqp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-316400 -- exec busybox-fc5497c4f-pm79t -- nslookup kubernetes.default.svc.cluster.local
E0603 05:27:10.841720    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.43s)
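
The DNS checks above form a full matrix: each busybox pod resolves each of the three names, proving cluster DNS works from both nodes. Sketched as code, with pod names and lookup targets taken verbatim from the log; the commands are printed rather than executed, purely to show the matrix:

	package main

	import "fmt"

	func main() {
		pods := []string{"busybox-fc5497c4f-hmxqp", "busybox-fc5497c4f-pm79t"}
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, target := range targets {
				fmt.Printf("minikube kubectl -p multinode-316400 -- exec %s -- nslookup %s\n", pod, target)
			}
		}
	}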

TestMultiNode/serial/AddNode (227.89s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-316400 -v 3 --alsologtostderr
E0603 05:28:39.508021    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-316400 -v 3 --alsologtostderr: (3m12.1679531s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr
E0603 05:31:54.080199    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr: (35.7211707s)
--- PASS: TestMultiNode/serial/AddNode (227.89s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-316400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (9.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.6795255s)
--- PASS: TestMultiNode/serial/ProfileList (9.68s)

TestMultiNode/serial/CopyFile (353.83s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status --output json --alsologtostderr
E0603 05:32:10.841586    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 status --output json --alsologtostderr: (35.9413728s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400:/home/docker/cp-test.txt: (9.4937143s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt": (9.5544153s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400.txt: (9.5372728s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt": (9.4799472s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt multinode-316400-m02:/home/docker/cp-test_multinode-316400_multinode-316400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt multinode-316400-m02:/home/docker/cp-test_multinode-316400_multinode-316400-m02.txt: (16.5951202s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt"
E0603 05:33:39.513042    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt": (9.4226398s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test_multinode-316400_multinode-316400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test_multinode-316400_multinode-316400-m02.txt": (9.4180887s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt multinode-316400-m03:/home/docker/cp-test_multinode-316400_multinode-316400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400:/home/docker/cp-test.txt multinode-316400-m03:/home/docker/cp-test_multinode-316400_multinode-316400-m03.txt: (15.8919964s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test.txt": (9.1193444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test_multinode-316400_multinode-316400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test_multinode-316400_multinode-316400-m03.txt": (9.3153281s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400-m02:/home/docker/cp-test.txt: (9.3057467s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt": (9.3228888s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m02.txt: (9.0950312s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt": (9.2555537s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt multinode-316400:/home/docker/cp-test_multinode-316400-m02_multinode-316400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt multinode-316400:/home/docker/cp-test_multinode-316400-m02_multinode-316400.txt: (16.0077522s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt": (9.1524131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test_multinode-316400-m02_multinode-316400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test_multinode-316400-m02_multinode-316400.txt": (9.0533295s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt multinode-316400-m03:/home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m02:/home/docker/cp-test.txt multinode-316400-m03:/home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt: (15.830009s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test.txt": (9.1636139s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test_multinode-316400-m02_multinode-316400-m03.txt": (9.1505943s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp testdata\cp-test.txt multinode-316400-m03:/home/docker/cp-test.txt: (9.0717275s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt": (9.043092s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile4262688910\001\cp-test_multinode-316400-m03.txt: (9.1517979s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt": (9.1012677s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt multinode-316400:/home/docker/cp-test_multinode-316400-m03_multinode-316400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt multinode-316400:/home/docker/cp-test_multinode-316400-m03_multinode-316400.txt: (15.9668208s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt"
E0603 05:37:10.848377    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt": (9.0978435s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test_multinode-316400-m03_multinode-316400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400 "sudo cat /home/docker/cp-test_multinode-316400-m03_multinode-316400.txt": (9.076155s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt multinode-316400-m02:/home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 cp multinode-316400-m03:/home/docker/cp-test.txt multinode-316400-m02:/home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt: (16.0108408s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m03 "sudo cat /home/docker/cp-test.txt": (9.0494495s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 ssh -n multinode-316400-m02 "sudo cat /home/docker/cp-test_multinode-316400-m03_multinode-316400-m02.txt": (9.041795s)
--- PASS: TestMultiNode/serial/CopyFile (353.83s)
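
The long sequence above is mechanical: for every ordered pair of distinct nodes, cp-test.txt is copied from one to the other and read back on both sides with sudo cat. The pair generation can be sketched as follows, with node names from the log and the cp-test_<src>_<dst>.txt naming convention preserved:

	package main

	import "fmt"

	func main() {
		nodes := []string{"multinode-316400", "multinode-316400-m02", "multinode-316400-m03"}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				// Mirrors the `minikube cp` invocations above.
				fmt.Printf("minikube -p multinode-316400 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					src, dst, src, dst)
			}
		}
	}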

TestMultiNode/serial/StopNode (74.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 node stop m03: (23.8624692s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status
E0603 05:38:39.514930    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-316400 status: exit status 7 (25.309866s)
-- stdout --
	multinode-316400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-316400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-316400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0603 05:38:22.909813   12396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-316400 status --alsologtostderr: exit status 7 (25.1212237s)
-- stdout --
	multinode-316400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-316400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-316400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0603 05:38:48.231503    6632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 05:38:48.240579    6632 out.go:291] Setting OutFile to fd 1156 ...
	I0603 05:38:48.241035    6632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:38:48.241035    6632 out.go:304] Setting ErrFile to fd 1456...
	I0603 05:38:48.241035    6632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 05:38:48.262306    6632 out.go:298] Setting JSON to false
	I0603 05:38:48.262306    6632 mustload.go:65] Loading cluster: multinode-316400
	I0603 05:38:48.262306    6632 notify.go:220] Checking for updates...
	I0603 05:38:48.263084    6632 config.go:182] Loaded profile config "multinode-316400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 05:38:48.263084    6632 status.go:255] checking status of multinode-316400 ...
	I0603 05:38:48.263707    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:38:50.383020    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:38:50.394177    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:38:50.394177    6632 status.go:330] multinode-316400 host status = "Running" (err=<nil>)
	I0603 05:38:50.394177    6632 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:38:50.395025    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:38:52.450108    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:38:52.450108    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:38:52.450209    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:38:54.893688    6632 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:38:54.893747    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:38:54.893747    6632 host.go:66] Checking if "multinode-316400" exists ...
	I0603 05:38:54.905314    6632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 05:38:54.905314    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400 ).state
	I0603 05:38:56.976483    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:38:56.976483    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:38:56.985364    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400 ).networkadapters[0]).ipaddresses[0]
	I0603 05:38:59.432857    6632 main.go:141] libmachine: [stdout =====>] : 172.17.87.47
	
	I0603 05:38:59.442694    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:38:59.442927    6632 sshutil.go:53] new ssh client: &{IP:172.17.87.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400\id_rsa Username:docker}
	I0603 05:38:59.547368    6632 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6420381s)
	I0603 05:38:59.561392    6632 ssh_runner.go:195] Run: systemctl --version
	I0603 05:38:59.583331    6632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:38:59.609863    6632 kubeconfig.go:125] found "multinode-316400" server: "https://172.17.87.47:8443"
	I0603 05:38:59.609863    6632 api_server.go:166] Checking apiserver status ...
	I0603 05:38:59.622242    6632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 05:38:59.656958    6632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2014/cgroup
	W0603 05:38:59.671638    6632 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2014/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 05:38:59.683383    6632 ssh_runner.go:195] Run: ls
	I0603 05:38:59.689865    6632 api_server.go:253] Checking apiserver healthz at https://172.17.87.47:8443/healthz ...
	I0603 05:38:59.696749    6632 api_server.go:279] https://172.17.87.47:8443/healthz returned 200:
	ok
	I0603 05:38:59.696749    6632 status.go:422] multinode-316400 apiserver status = Running (err=<nil>)
	I0603 05:38:59.699701    6632 status.go:257] multinode-316400 status: &{Name:multinode-316400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 05:38:59.699781    6632 status.go:255] checking status of multinode-316400-m02 ...
	I0603 05:38:59.700754    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:39:01.877919    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:39:01.877919    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:01.889310    6632 status.go:330] multinode-316400-m02 host status = "Running" (err=<nil>)
	I0603 05:39:01.889310    6632 host.go:66] Checking if "multinode-316400-m02" exists ...
	I0603 05:39:01.889393    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:39:04.002635    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:39:04.008152    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:04.008152    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:39:06.481232    6632 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:39:06.481232    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:06.481381    6632 host.go:66] Checking if "multinode-316400-m02" exists ...
	I0603 05:39:06.492664    6632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 05:39:06.492664    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m02 ).state
	I0603 05:39:08.535515    6632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 05:39:08.535515    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:08.545928    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-316400-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 05:39:11.016975    6632 main.go:141] libmachine: [stdout =====>] : 172.17.94.201
	
	I0603 05:39:11.024377    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:11.024377    6632 sshutil.go:53] new ssh client: &{IP:172.17.94.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-316400-m02\id_rsa Username:docker}
	I0603 05:39:11.125403    6632 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6327214s)
	I0603 05:39:11.140089    6632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 05:39:11.165560    6632 status.go:257] multinode-316400-m02 status: &{Name:multinode-316400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0603 05:39:11.165560    6632 status.go:255] checking status of multinode-316400-m03 ...
	I0603 05:39:11.166007    6632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-316400-m03 ).state
	I0603 05:39:13.213476    6632 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 05:39:13.215809    6632 main.go:141] libmachine: [stderr =====>] : 
	I0603 05:39:13.215893    6632 status.go:330] multinode-316400-m03 host status = "Stopped" (err=<nil>)
	I0603 05:39:13.215893    6632 status.go:343] host is not running, skipping remaining checks
	I0603 05:39:13.215893    6632 status.go:257] multinode-316400-m03 status: &{Name:multinode-316400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (74.32s)
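
Note that `minikube status` intentionally exits non-zero (7 in this run) once any node is stopped, so the test treats that exit code as expected rather than as a failure. A sketch of the same tolerant invocation; the binary path and profile name are taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-316400", "status")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit code 7 here just means a host is stopped, as in the run above.
			fmt.Printf("one or more nodes stopped:\n%s", out)
		} else if err != nil {
			log.Fatal(err)
		} else {
			fmt.Printf("all nodes running:\n%s", out)
		}
	}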

TestMultiNode/serial/StartAfterStop (179.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 node start m03 -v=7 --alsologtostderr: (2m25.0419128s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-316400 status -v=7 --alsologtostderr
E0603 05:41:42.736946    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 05:42:10.856585    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-316400 status -v=7 --alsologtostderr: (34.7075087s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (179.93s)

TestPreload (523.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-276800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0603 05:53:39.510416    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 05:57:10.847685    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-276800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m25.5331119s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-276800 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-276800 image pull gcr.io/k8s-minikube/busybox: (8.2374325s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-276800
E0603 05:58:22.748761    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-276800: (38.6808904s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-276800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0603 05:58:39.516414    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-276800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m42.822631s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-276800 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-276800 image list: (7.0625667s)
helpers_test.go:175: Cleaning up "test-preload-276800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-276800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-276800: (41.2788328s)
--- PASS: TestPreload (523.64s)
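
The point of the sequence above: an image pulled into the --preload=false v1.24.4 cluster must still be listed after the stop and the restart on the default Kubernetes version. A sketch of that final `image list` check, with binary and profile names from the log:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "test-preload-276800", "image", "list").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("pulled image survived the restart")
		} else {
			log.Fatal("busybox missing after restart")
		}
	}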

TestScheduledStopWindows (325.5s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-686400 --memory=2048 --driver=hyperv
E0603 06:02:10.860705    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 06:03:39.515106    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
E0603 06:05:14.100260    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-686400 --memory=2048 --driver=hyperv: (3m13.2109311s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-686400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-686400 --schedule 5m: (10.7240626s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-686400 -n scheduled-stop-686400
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-686400 -n scheduled-stop-686400: exit status 1 (10.0403079s)
** stderr ** 
	W0603 06:05:32.367455    5576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-686400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-686400 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.4841568s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-686400 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-686400 --schedule 5s: (10.5075166s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-686400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-686400: exit status 7 (2.322433s)
-- stdout --
	scheduled-stop-686400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0603 06:07:02.395803    9936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-686400 -n scheduled-stop-686400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-686400 -n scheduled-stop-686400: exit status 7 (2.2741581s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0603 06:07:04.722799    3364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-686400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-686400
E0603 06:07:10.851919    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-686400: (26.928787s)
--- PASS: TestScheduledStopWindows (325.50s)
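
The scheduled stop is implemented by a minikube-scheduled-stop systemd unit inside the guest (inspected above with `systemctl show`), and the pending schedule surfaces through the TimeToStop status field. A sketch of polling it with the same --format template the test uses; the non-zero exit after the VM actually stops is expected and deliberately ignored here:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, _ := exec.Command("out/minikube-windows-amd64.exe", "status",
				"--format={{.TimeToStop}}", "-p", "scheduled-stop-686400").Output()
			fmt.Printf("TimeToStop: %q\n", strings.TrimSpace(string(out)))
			time.Sleep(30 * time.Second)
		}
	}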

TestRunningBinaryUpgrade (1076.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1432650931.exe start -p running-upgrade-647400 --memory=2200 --vm-driver=hyperv
E0603 06:08:39.518534    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-754300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1432650931.exe start -p running-upgrade-647400 --memory=2200 --vm-driver=hyperv: (8m13.1025822s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-647400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0603 06:17:10.850298    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-647400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m37.7609909s)
helpers_test.go:175: Cleaning up "running-upgrade-647400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-647400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-647400: (1m4.9548848s)
--- PASS: TestRunningBinaryUpgrade (1076.95s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-647400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-647400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (276.2244ms)
-- stdout --
	* [NoKubernetes-647400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W0603 06:07:33.945386   11480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.28s)
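
The recurring `Unable to resolve the current Docker CLI context "default"` warning is harmless on this host: no Docker context has been created, and the Docker CLI looks up context metadata in a directory named by the SHA-256 digest of the context name, which is why the same 37a8eec1... path appears in every message. A sketch of that derivation; the digest-based layout is an assumption about the Docker CLI context store, not something this log verifies:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		digest := sha256.Sum256([]byte("default"))
		// Expected to match the directory name in the warnings above.
		fmt.Printf("%x\n", digest)
	}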

TestStoppedBinaryUpgrade/Setup (1.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.36s)

TestStoppedBinaryUpgrade/Upgrade (874.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2657585056.exe start -p stopped-upgrade-398500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2657585056.exe start -p stopped-upgrade-398500 --memory=2200 --vm-driver=hyperv: (7m37.1994318s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2657585056.exe -p stopped-upgrade-398500 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2657585056.exe -p stopped-upgrade-398500 stop: (35.2784659s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-398500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0603 06:21:54.116122    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
E0603 06:22:10.877287    7364 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-402100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-398500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m22.0255552s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (874.51s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.48s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-398500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-398500: (10.4781721s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.48s)

Test skip (30/200)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.04s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-754300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-754300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 3700: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.04s)
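Note: this skip leaked the dashboard process because the harness could not kill pid 3700 ("Access is denied"), which on Windows usually means the terminating shell lacked the privileges of the process owner. A minimal cleanup sketch, assuming the process from this run is still alive, run from an elevated PowerShell session:

    # Force-stop the leaked dashboard process reported above.
    Stop-Process -Id 3700 -Force
    # Equivalent cmd.exe form:
    taskkill /PID 3700 /F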

TestFunctional/parallel/DryRun (5.04s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-754300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-754300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0369827s)
-- stdout --
	* [functional-754300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	W0603 04:11:25.716767    9300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 04:11:25.718818    9300 out.go:291] Setting OutFile to fd 1060 ...
	I0603 04:11:25.719756    9300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:11:25.719756    9300 out.go:304] Setting ErrFile to fd 708...
	I0603 04:11:25.719756    9300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:11:25.752235    9300 out.go:298] Setting JSON to false
	I0603 04:11:25.752909    9300 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2313,"bootTime":1717410772,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 04:11:25.752909    9300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 04:11:25.758911    9300 out.go:177] * [functional-754300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 04:11:25.761322    9300 notify.go:220] Checking for updates...
	I0603 04:11:25.767899    9300 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:11:25.768524    9300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 04:11:25.771184    9300 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 04:11:25.775510    9300 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 04:11:25.780074    9300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 04:11:25.783760    9300 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:11:25.785072    9300 driver.go:392] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)
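Note: the Unable to resolve the current Docker CLI context "default" warning in the stderr block above recurs throughout this report; the skip itself is for minikube issue 9785, not for that warning. A plausible cleanup on the affected host, assuming the warning stems from a stale currentContext entry in the user's Docker config rather than a minikube bug, is to re-select the built-in context:

    # Show the contexts the Docker CLI knows about.
    docker context ls
    # Point the CLI back at the built-in "default" context; this
    # rewrites currentContext in %USERPROFILE%\.docker\config.json.
    docker context use default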

TestFunctional/parallel/InternationalLanguage (5.01s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-754300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-754300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0116622s)
-- stdout --
	* [functional-754300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19008
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	W0603 04:11:27.377017    7932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 04:11:27.380525    7932 out.go:291] Setting OutFile to fd 1248 ...
	I0603 04:11:27.381275    7932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:11:27.381275    7932 out.go:304] Setting ErrFile to fd 1244...
	I0603 04:11:27.381275    7932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 04:11:27.404382    7932 out.go:298] Setting JSON to false
	I0603 04:11:27.406128    7932 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2315,"bootTime":1717410772,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0603 04:11:27.406128    7932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 04:11:27.415168    7932 out.go:177] * [functional-754300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 04:11:27.418538    7932 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0603 04:11:27.417868    7932 notify.go:220] Checking for updates...
	I0603 04:11:27.421241    7932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 04:11:27.423845    7932 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0603 04:11:27.426752    7932 out.go:177]   - MINIKUBE_LOCATION=19008
	I0603 04:11:27.429673    7932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 04:11:27.432921    7932 config.go:182] Loaded profile config "functional-754300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 04:11:27.432921    7932 driver.go:392] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.01s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
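Note: the skipped mount test exercises minikube's 9p host mount, which issue 5029 tracks as broken on hyperv. For reference, a hedged sketch of the kind of invocation the test would drive on a working driver; the host path and mount target here are illustrative, not taken from this run:

    # Serve a host directory into the guest over 9p (blocks until Ctrl+C).
    out/minikube-windows-amd64.exe -p functional-754300 mount C:\testdata:/mount-9p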

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
